I built my Docker image and pushed it to Amazon ECR (the image registry).
I wrote a deployment.yaml file and ran kubectl apply -f deployment.yaml.
Worth noting: I used kops to deploy the K8s cluster to AWS EC2.
I can see the containers running in Kubernetes pods in the Kubernetes Dashboard, and kubectl get pods -o wide also shows me the pods.
The image I deployed is a simple API that exposes one route. My problem is that I have no idea how to query the container I just deployed.
Dockerfile of deployed image:
FROM node:lts
COPY package*.json tsconfig.json ./
RUN npm ci
COPY . .
RUN npm run build
EXPOSE 3000
CMD ["node", "dist/index.js"]
deployment.yaml (kubectl apply -f deployment.yaml):
apiVersion: apps/v1
kind: Deployment
metadata:
  name: vuekcal
spec:
  selector:
    matchLabels:
      app: vuekcal
  replicas: 2
  template:
    metadata:
      labels:
        app: vuekcal
    spec:
      containers:
      - name: search
        image: [my-repo-id].dkr.ecr.eu-central-1.amazonaws.com/k8s-search
        ports:
        - containerPort: 3000
What I tried:
Run kubectl get pods -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
vuekcal-6956589484-7f2kx 1/1 Running 0 16m 100.96.2.6 ip-172-20-54-21.eu-central-1.compute.internal <none> <none>
vuekcal-6956589484-f4pqf 1/1 Running 0 16m 100.96.1.7 ip-172-20-59-29.eu-central-1.compute.internal <none> <none>
If I take an IP address from the IP column and try to curl it, nothing happens.
I assume this is because those IPs are internal to the cluster.
I also tried finding the K8s node that is running my pod and curling that node's public IP address.
Same thing: no response.
Everything is fine if I run the container locally with docker run k8s-search.
I have no idea what to do here. How do I reach the container that deployment.yaml sets up inside the Kubernetes cluster?
To access the pod from outside the cluster you need to create either a NodePort or a LoadBalancer type Service.
kubectl expose deployment vuekcal --type=NodePort --name=example-service
Then access it via curl http://<public-node-ip>:<node-port>
Make sure you have run the kubectl expose command above!
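If you prefer a YAML manifest over kubectl expose, the equivalent NodePort Service would look roughly like this (a sketch inferred from the deployment.yaml above, not something from the original answer):

apiVersion: v1
kind: Service
metadata:
  name: example-service
spec:
  type: NodePort
  selector:
    app: vuekcal
  ports:
  - port: 3000
    targetPort: 3000

Applying it with kubectl apply -f service.yaml has the same effect as the expose command.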
Public node IP
To get the public node IP, run the following command:
kubectl get nodes -o wide
and look at the "EXTERNAL-IP" column. This is the public IP of the node that is running your container, and this is where you should try to connect. For example, the external IP of your node could be 133.71.33.7. Remember this IP.
NodePort
It's different from the containerPort in your deployment.yaml.
To find the NodePort, run this command:
kubectl describe service example-service
Replace example-service with whatever you wrote in --name= when running kubectl expose deployment ... (first command in this post)
After you run the command, you'll see something like this:
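The original output is not reproduced here, but an illustrative excerpt (values are examples only) would look roughly like:

Name:         example-service
Type:         NodePort
Port:         <unset>  3000/TCP
TargetPort:   3000/TCP
NodePort:     <unset>  31110/TCP

Look for the NodePort: line.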
This is the port you should use when connecting.
Putting it together
curl http://133.71.33.7:31110
Related
I'm running minikube using
minikube start --driver=docker
Then I use the following sample commands to create and expose the service:
kubectl create deployment hello-minikube1 --image=k8s.gcr.io/echoserver:1.4
kubectl expose deployment hello-minikube1 --type=NodePort --port=8080
Problem
The command minikube service hello-minikube1 --url doesn't return a service URL. Using <minikube ip>:<nodePort> also doesn't work; the connection just hangs.
I tried creating pods with different images and I still can't reach the resulting service from outside.
I am trying to figure out networking in Kubernetes, and especially the handling of multicontainer pods. In my simple scenario I have two pods and three containers in total: one pod has two containers, and the other has a single container that wants to communicate with a specific container in the multicontainer pod. I want to figure out how Kubernetes handles the communication between such containers.
For this purpose I have a simple multicontainer pod in a "sidecar architecture"; the YAML file is as follows:
apiVersion: v1
kind: Pod
metadata:
  name: nginx
  labels:
    app: nginx
spec:
  containers:
  - name: nginx-container
    image: nginx
    ports:
    - containerPort: 80
  - name: sidecar
    image: curlimages/curl
    command: ["/bin/sh"]
    args: ["-c", "echo Hello from the sidecar container; sleep 300"]
    ports:
    - containerPort: 5000
What I want to achieve with this YAML file is that the pod "nginx" has two containers: one running nginx and listening on port 80 of that pod, and the other running a simple curl image (anything other than nginx, so as not to violate Kubernetes' one-container-per-pod convention) that can receive traffic on the pod's port 5000.
Then I have another YAML file, again running an nginx image. This container is going to try to communicate with the nginx and curl containers in the other pod. The YAML file is as follows:
apiVersion: v1
kind: Pod
metadata:
  name: nginx-simple
  labels:
    app: nginx
spec:
  containers:
  - name: nginx-container
    image: nginx
    ports:
    - containerPort: 80
After deploying the pods I expose the nginx pod simply using the following command:
kubectl expose pods/nginx
Then I start a terminal inside the nginx-simple container (the pod with one container). When I curl the IP address I get from kubectl get svc (the service generated by my previous expose command) on port 80, I easily get the nginx welcome message. However, the problem starts when I try to communicate with the curl container. When I curl the same IP address, but this time on port 5000 (the containerPort I set in the YAML file), I get a connection refused error. What am I doing wrong here?
Thanks in advance.
P.S: I would also be more than happy to hear your learning material suggestions for this topic. Thank you so much.
curl is a command-line tool. It is not a server listening on a port, but a client tool that can be used to access servers.
This container does not contain a server that listens on a port:
- name: sidecar
  image: curlimages/curl
  command: ["/bin/sh"]
  args: ["-c", "echo Hello from the sidecar container; sleep 300"]
  ports:
  - containerPort: 5000
Services deployed on Kubernetes are typically containers running some form of webserver, but they might be other kinds of services as well.
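As an aside (not part of the original question or answer): if the goal is a sidecar that actually listens on port 5000, one minimal sketch is to swap the curl image for a small HTTP server, for example Python's built-in one:

- name: sidecar
  image: python:3-alpine
  # serves the working directory over HTTP on port 5000
  command: ["python", "-m", "http.server", "5000"]
  ports:
  - containerPort: 5000

With that change, <Pod IP>:5000 would answer HTTP requests.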
Shouldn't I at least be able to ping the curl container?
Nope, containers are not virtual machines. A container typically contains only a single process and can only do what that process does. On Kubernetes these processes are typically webservers listening on, say, port 8080, so commonly you can only check whether they are alive by sending them an HTTP request, as illustrated below. See e.g. Configure Liveness, Readiness and Startup Probes.
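For illustration only (hypothetical path and port, not from the original answer), such an HTTP check could look like this in a container spec:

livenessProbe:
  httpGet:
    path: /
    port: 8080
  initialDelaySeconds: 5
  periodSeconds: 10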
When I run telnet pod-ip 5000 I cannot reach this curl container either.
The curl binary is not a process that listens on any port; e.g. it cannot respond to ICMP. You can typically ping nodes but not containers. curl is an HTTP client that is typically used to send an HTTP request, wait for the HTTP response, and then terminate. You can probably see this by inspecting the Pod: the curl container has terminated.
I am trying to figure out how communication is handled in a multicontainer pod. Each pod has its own unique IP address and containers in the pod can use localhost. I get that, but how can a container in a different pod target a specific container in a multicontainer pod?
I suggest that you add two webservers (e.g. two nginx containers) to a pod, listening on different ports, e.g. port 8080 and port 8081. A client can then choose which container it wants to interact with by using the Pod IP and the container port, <Pod IP>:<containerPort>. E.g. add two containers, configure them to listen on different ports, and let them serve different content, as sketched below.
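A minimal sketch of that suggestion; it swaps nginx for hashicorp/http-echo purely because http-echo's listen port and response text are plain command-line flags (nginx would need a custom config file to listen on anything other than port 80):

apiVersion: v1
kind: Pod
metadata:
  name: two-servers
spec:
  containers:
  - name: server-one
    image: hashicorp/http-echo
    args: ["-listen=:8080", "-text=hello from server-one"]
    ports:
    - containerPort: 8080
  - name: server-two
    image: hashicorp/http-echo
    args: ["-listen=:8081", "-text=hello from server-two"]
    ports:
    - containerPort: 8081

From another pod, curl <Pod IP>:8080 and curl <Pod IP>:8081 then return different responses, which makes the <Pod IP>:<containerPort> addressing visible.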
In short: I use Google Compute Engine (external IP: 34.73.89.55; all ports and protocols are open), then I install Docker, minikube, and kubectl. Then:
minikube start --driver=docker
minikube tunnel
kubectl create deployment hello-minikube1 --image=k8s.gcr.io/echoserver:1.4
kubectl expose deployment hello-minikube1 --type=LoadBalancer --port=8080
kubectl get svc
and I get:
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
hello-minikube1 LoadBalancer 10.110.130.109 10.110.130.109 8080:31993/TCP 9m22s
My question is: why doesn't the EXTERNAL-IP match the host's external IP (34.73.89.55)? How can I access this service remotely via the host's external IP (e.g. from home, via a browser)?
P.S.: I would like to stick with Google Compute Engine.
EDIT:
I also tried:
sudo minikube start --driver=none
sudo kubectl create deployment hello-minikube1 --image=k8s.gcr.io/echoserver:1.4
sudo kubectl expose deployment hello-minikube1 --type=NodePort --port=8080
wget 127.0.0.1:8080
=> does not work
By default minikube expects to run in a separate VM. This can be changed by explicitly specifying a driver.
Why the EXTERNAL-IP did not match with the host's external IP?
Because minikube uses a tunnel, which creates a route to services deployed with type LoadBalancer and sets their Ingress to their ClusterIP. For a detailed example, see this documentation.
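For illustration (service name taken from the question), the typical flow with the tunnel looks like this:

# terminal 1: keeps running and creates routes for LoadBalancer services
minikube tunnel
# terminal 2: EXTERNAL-IP is now populated and mirrors the ClusterIP,
# as in the kubectl get svc output in the question
kubectl get svc hello-minikube1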
How can I access this service remotely by the host's external IP?
I see two options here:
More recommended: Set --driver=none
Minikube also supports a --driver=none option that runs the Kubernetes components on the host and not in a VM. Using this driver requires Docker and a Linux environment but not a hypervisor.
Might be less ideal: use port forwarding (either via iptables or a proxy); one possible shape is sketched below.
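For the proxy variant, one possible shape (a sketch, not from the original answer) is kubectl port-forward bound to all interfaces, so that traffic arriving on the host's external IP reaches the Service; the firewall still has to allow the chosen port:

kubectl port-forward --address 0.0.0.0 service/hello-minikube1 8080:8080
# then, from outside the VM:
curl http://34.73.89.55:8080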
Also remember that minikube was created for testing purposes on localhost. Keep that in mind while using it.
EDIT:
When going for --driver=none you can:
Use the NodePort type instead of LoadBalancer.
Continue using LoadBalancer with a modified Service by adding:
spec:
  externalIPs:
  - <host_address>
For example:
apiVersion: v1
kind: Service
metadata:
  creationTimestamp: null
  labels:
    app: hello-minikube1
  name: hello-minikube1
spec:
  externalIPs:
  - <host_address>
  ports:
  - port: 8080
    protocol: TCP
    targetPort: 8080
  selector:
    app: hello-minikube1
  type: LoadBalancer
status:
  loadBalancer: {}
The above was tested and resulted in EXTERNAL IP = HOST IP.
Please let me know if that helped.
I am using the local Kubernetes cluster from Docker Desktop on Windows 10. No virtual machines, no minikube.
I need to expose a port on my localhost for some service.
For example, I take kubernetes-bootcamp image from the official tutorial:
docker pull jocatalin/kubernetes-bootcamp:v2
Put it in the local registry:
docker tag jocatalin/kubernetes-bootcamp:v2 localhost:5000/kubernetes-bootcamp
docker push localhost:5000/kubernetes-bootcamp
Then create a deployment with this image:
kubectl create deployment kubernetes-bootcamp --image=localhost:5000/kubernetes-bootcamp
Then let's expose a port for accessing our deployment:
kubectl expose deployment/kubernetes-bootcamp --type="NodePort" --port 8080
kubectl get services
kubernetes-bootcamp NodePort 10.102.167.98 <none> 8080:32645/TCP 8s
So we found out that the exposed port for our deployment is 32645. Let's try to request it:
curl localhost:32645
Failed to connect to localhost port 32645: Connection refused
And nothing works.
But if I try port-forward, everything works:
kubectl port-forward deployment/kubernetes-bootcamp 7000:8080
Forwarding from 127.0.0.1:7000 -> 8080
Forwarding from [::1]:7000 -> 8080
Handling connection for 7000
Another console:
curl localhost:7000
Hello Kubernetes bootcamp! | Running on: kubernetes-bootcamp-7b5598d7b5-8qf9j | v=2
What am I doing wrong? I have found out several posts like mine, but none of them help me.
Try running this command:
kubectl get svc | grep kubernetes-bootcamp
After that, expose the pod to your network using:
kubectl expose pod <pod-name> --type=NodePort
You can then check the URL, for example with (on minikube):
minikube service <service-name> --url
So I found the root of the problem: the local Kubernetes cluster was somehow misbehaving.
How I solved the problem:
Remove C:\ProgramData\DockerDesktop\pki
Recreate all pods, services, and deployments
Now the same steps I used before work great.
I am trying to deploy containers to a local Kubernetes cluster. So far I have installed the Docker daemon, minikube, and the minikube dashboard, and they are all working fine. I have also set up a local container registry on port 5000 and pushed two images of my application to it; I can see them in the browser at http://localhost:5000/v2/_catalog.
Now I am trying to bring up a pod using minikube:
kubectl apply -f ./docker-compose-k.yml --record
I am getting an error on the dashboard like this:
Failed to pull image "localhost:5000/coremvc2": rpc error: code = Unknown desc = Error response from daemon: Get http://localhost:5000/v2/: dial tcp 127.0.0.1:5000: connect: connection refused
Here is my file (docker-compose-k.yml):
apiVersion: apps/v1
kind: Deployment
metadata:
  name: core23
  labels:
    app: codemvc
spec:
  replicas: 1
  selector:
    matchLabels:
      app: coremvc
  template:
    metadata:
      labels:
        app: coremvc
    spec:
      containers:
      - name: coremvc
        image: localhost:5000/coremvc2
        ports:
        - containerPort: 80
        imagePullPolicy: Always
I don't know why these images are not pulled, since the Docker daemon and Kubernetes are both on the same machine. I have also tried this with a Docker Hub image and it works fine, but I want to do this using local images.
Please give me a hint or any guidance.
Thank you,
Based on the comment, you started minikube with minikube start (without specifying a driver).
That means that minikube is running inside a VirtualBox VM. In order to make your use case work, you have two choices:
The hard way: set up the connection between your VM and your host and use your host IP.
The easy way: connect to your VM using minikube ssh and install your registry there. Then your deployment should work with your VM's IP.
If you don't want to use VirtualBox, you should read the documentation about the other existing drivers and how to use them.
Hope this helps!
The issue is that you have set up the Docker registry on your host machine, while minikube runs in a virtualized environment (according to your example, VirtualBox).
That is why you receive the "connection refused" error when it tries to fetch the image on port 5000: there is no process "inside" minikube that listens on 5000. Your registry is deployed "outside" of minikube.
As Marc said, there are two ways to fix it, and I have reproduced both. The hard way will get you to:
Failed to pull image "10.150.0.4:5000/my-alpine": rpc error: code = Unknown desc = Error response from daemon: Get https://10.150.0.4:5000/v2/: http: server gave HTTP response to HTTPS client
So you'll have to set up a secure Docker registry according to the Docker documentation.
The easy way is to set it up on top of your minikube.
my#linux-vm2:~$ minikube ssh
$ docker run -d -p 5000:5000 --restart=always --name registry registry:2
...
Status: Downloaded newer image for registry:2
$ docker pull nginx:latest
...
Status: Downloaded newer image for nginx:latest
$ docker tag nginx:latest localhost:5000/my-nginx
$ docker push localhost:5000/my-nginx
$ logout
my#linux-vm2:~$ kubectl apply -f ./docker-compose-k.yml --record
deployment.apps/core23 created
my#linux-vm2:~$ kubectl get pods
NAME READY STATUS RESTARTS AGE
core23-b44b794cb-vmhwl 1/1 Running 0 4s
my@linux-vm2:~$ kubectl describe pods
...
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Scheduled <unknown> default-scheduler Successfully assigned default/core23-b44b794cb-vmhwl to minikube
Normal Pulling 9s kubelet, minikube Pulling image "localhost:5000/my-nginx"
Normal Pulled 9s kubelet, minikube Successfully pulled image "localhost:5000/my-nginx"
Normal Created 9s kubelet, minikube Created container coremvc
Normal Started 9s kubelet, minikube Started container coremvc
Please note that I've been using nginx instead of coremvc2 in this example (but the steps are still the same).
To sum it up, it is possible to achieve the result you need in a few different ways. Please try it and let us know how it went. Cheers :)
localhost:5000 is resolved on the node that pulls the image (the minikube VM), not on your host machine.
If the Docker registry is running on the same host where minikube is running:
Get the IP address of the host (e.g. using ifconfig). Say, it is 10.0.2.15.
Tag the image:
docker tag coremvc2:latest 10.0.2.15:5000/coremvc2:latest
Push the so-tagged image to the local registry:
docker push 10.0.2.15:5000/coremvc2:latest
Change in the Deployment:
image: localhost:5000/coremvc2
to
image: 10.0.2.15:5000/coremvc2:latest
EDIT: If getting "...http: server gave HTTP response to HTTPS client" error, you could configure the local Docker registry as insecure registry for the local Docker daemon by editing /etc/docker/daemon.json (create it if it doesn’t exist):
{
  ... any other settings, or remove this line ...
  "insecure-registries": ["10.0.2.15:5000"]
}
...then restart docker:
sudo service docker restart
Not sure how you run the local Docker registry, but this is one way:
docker run -d -p 5000:5000 --name registry registry:2