how to change the port of a kubernetes container/pod? - docker

I am displaying the output of the "docker ps -a" command on my HTML page to list all the containers. I want to change the port of these containers using a button on the page itself. With plain Docker, if a container is running, I would run docker stop on the container ID and restart it, adding -p HOSTPORT:CONTAINERPORT to the command. But since all the running containers are Kubernetes containers/pods, stopping them will just re-create a new pod/container with a different name. So how do I change the port of the container/pod in such cases?
Output of the "docker ps -a" command:
NAMES CONTAINER ID STATUS
k8s_nginx_nginx-6cdb6c86d4-z7m7m 56711e6de1be Up 2 seconds
k8s_POD_nginx-6cdb6c86d4-z7m7m_d 70b21761cb74 Up 3 seconds
k8s_coredns_coredns-5c98db65d4-7 dfb21bb7c7f4 Up 7 days
k8s_POD_coredns-5c98db65d4-7djs8 a336be8230ce Up 7 days
k8s_POD_kube-proxy-9722h_kube-sy 5e290420dec4 Up 7 days
k8s_POD_kube-apiserver-wootz_kub a23dea72b38b Exited (255) 7 days ago
nginx.yaml
apiVersion: v1
kind: Service
metadata:
  name: nginx
  labels:
    app: nginx
spec:
  type: NodePort
  ports:
  - name: nginxport
    port: 80
    targetPort: 80
    nodePort: 30000
  selector:
    app: nginx
    tier: frontend
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx
  labels:
    app: nginx
spec:
  selector:
    matchLabels:
      app: nginx
      tier: frontend
  template:
    metadata:
      labels:
        app: nginx
        tier: frontend
    spec:
      containers:
      - image: suji165475/devops-sample:mxgraph
        name: nginx
        ports:
        - containerPort: 80
          name: nginxport
So how can I change the port of any of these containers/pods?

Most of the attributes of a PodSpec cannot be changed once the pod has been created. The port information is inside the containers array, and the linked documentation explicitly notes that containers "Cannot be updated." You must delete and recreate the pod if you want to change the ports it makes visible (or most of its other properties); there is no other way to do it.
You almost never directly deal with Pods (and for that matter you almost never mix plain Docker containers and Kubernetes on the same host). Typically you create a Deployment object, which can be updated in place, and it takes responsibility for creating and deleting Pods for you.
(The corollary to this is that if you're trying to manually delete and recreate Pods in isolation, changing their properties, but those Pods are also managed by Deployments, StatefulSets, or DaemonSets, the controller will notice that a replica is missing when you delete it and will recreate it with its original settings.)
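You can see that corollary in action with the pod from the docker ps output above (a sketch; the replacement pod's random suffix will differ):
kubectl delete pod nginx-6cdb6c86d4-z7m7m
kubectl get pods
# the Deployment immediately creates a replacement nginx-6cdb6c86d4-xxxxx pod
# with the exact same spec, including the original ports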

Answering the OP's question, as per their comments:
I want to change the port on which my Kubernetes containers run. I want to change the nodePort, containerPort, and targetPort for it. So how can I do this using the kubectl patch command for both the service and deployment?
kubectl patch deployment nginx --type json -p='[{"op": "replace", "path": "/spec/template/spec/containers/0/ports/0/containerPort", "value": <new port>}]' && \
kubectl patch service nginx --type json -p='[{"op": "replace", "path": "/spec/ports/0/targetPort", "value": <new port>}]' && \
kubectl patch service nginx --type json -p='[{"op": "replace", "path": "/spec/ports/0/nodePort", "value": <new port>}]'
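To verify the patches took effect, you can read the ports back with jsonpath (a quick check, not part of the original answer):
kubectl get service nginx -o jsonpath='{.spec.ports[0]}'
kubectl get deployment nginx -o jsonpath='{.spec.template.spec.containers[0].ports[0]}'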
For completeness, a pod-spec patch would look like this, but note that container ports on a running pod are immutable (as explained above), so the API server will reject it; patch the owning Deployment instead:
kubectl patch pod valid-pod --type='json' -p='[{"op": "replace", "path": "/spec/containers/0/ports/0/containerPort", "value": <new port>}]'
As David said, Pods aren't really used directly without a Deployment.
What you would normally do is have a Deployment which deploys the pods; that configuration can then be edited using kubectl.
Try using something like this,
kubectl patch deployment valid-deployment --type json -p='[{"op": "replace", "path": "/spec/template/spec/containers/0/ports/0/containerPort", "value": <new port>}]'
If you patch the deployment, the pods are automatically replaced via a rolling update.
That being said, if you change the port of the container, the service's targetPort has to be changed too. The simple fix is to give every container port a name and have the corresponding Service reference that name in targetPort, as sketched below; then the number only needs to change in one place.
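A minimal sketch of that naming pattern, reusing the nginxport name from the manifests in the question - the Service refers to the container port by name, so changing the containerPort number in the Deployment doesn't break the wiring:
# Service spec (sketch):
ports:
- name: nginxport
  port: 80
  targetPort: nginxport # a name, not a number: it follows the named containerPort
  nodePort: 30000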

Related

App not rendering on browser after running services and pods

Problem I am facing: when I run the kubectl apply command on both of the files below and try to open the app in the browser at http://192.168.49.2:30080/, the app does not render. I tried running minikube service fleetman-webapp --url but still no progress. Please help!
Additional information: minikube ip is 192.168.49.2.
Note: I have installed the Docker Desktop app on my MacBook Air (Catalina).
Browser message: This site can’t be reached. 192.168.49.2 took too long to respond.
Docker image link: https://hub.docker.com/r/richardchesterwood/k8s-fleetman-webapp-angular
first-pod.yaml file
apiVersion: v1
kind: Pod
metadata:
  name: webapp
  labels:
    mylabelname: webapp
spec:
  containers:
  - name: webapp
    image: richardchesterwood/k8s-fleetman-webapp-angular:release0
webapp-services.yaml file
apiVersion: v1
kind: Service
metadata:
  name: fleetman-webapp
spec:
  # This defines which pods are going to be represented by this Service.
  # The service becomes a network endpoint for either other services
  # or external users to connect to (e.g. a browser).
  selector:
    mylabelname: webapp
  ports:
  - name: http
    port: 80
    nodePort: 30080
  type: NodePort
Try creating minikube with driver none:
$ minikube start --driver=none
The none driver allows advanced minikube users to skip VM creation, allowing minikube to be run on a user-supplied VM.
Hence you will be able to reach your app via your host's (i.e. the user-supplied VM's) network address.
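For example, with the NodePort 30080 from webapp-services.yaml above, the app should then be reachable straight from the host (a sketch):
curl http://$(minikube ip):30080/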

How to expose low-numbered ports in the kubernetes mini-cluster that comes with Docker Desktop

I'm using the kubernetes cluster built in to Docker Desktop to develop my application.
I would like to expose services inside the cluster as ports on localhost.
I can do so using kubectl expose deployment foobar --type=NodePort --port=30088, which creates a service like this:
apiVersion: v1
kind: Service
metadata:
  labels:
    role: web
  name: foobar
spec:
  externalTrafficPolicy: Cluster
  ports:
  - nodePort: 30088
    port: 80
    protocol: TCP
    targetPort: 80
  selector:
    role: web
  type: NodePort
But it only works for very high numbered ports. If I try something lower I get:
The Service "kafka-external" is invalid: spec.ports[0].nodePort: Invalid value: 9092: provided port is not in the valid range. The range of valid ports is 30000-32767
It seems there is a kubernetes apiserver setting called ServiceNodePortRange which would allow me to override this restriction, but I can't figure out how to set it on Docker's builtin cluster.
So my question is: how do I expose a specific, low-numbered port (like 9092) on Docker's kubernetes cluster? Is there a way to override that setting? Or a better way to expose the service than NodePort?
NodePort is intended to be a building block for load balancers or other ingress modes. This means it doesn't matter which port you get, as long as you get one. That makes it a little clunky to use directly - you can't have just any port. You can change the port range, but you run the risk of conflicts with real things running on your nodes and with any pod HostPorts.
The default range is indeed 30000-32767, but it can be changed by setting the --service-node-port-range flag on the API server: update the file /etc/kubernetes/manifests/kube-apiserver.yaml and add the line --service-node-port-range=xxxxx-yyyyy.
The kube-apiserver.yaml file lives at /etc/kubernetes/manifests/kube-apiserver.yaml on the master node itself, not inside the kube-apiserver container/pod.
Log in to the Docker VM:
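The original answer doesn't spell out the login command; two commonly used ways to get a shell inside the Docker Desktop VM are sketched below (the nsenter image is a community tool, an assumption on my part rather than part of the original answer):
# via a privileged container (exit with Ctrl-d):
docker run -it --rm --privileged --pid=host justincormack/nsenter1
# or, on macOS, via screen (kill the session with Ctrl-a,k):
screen ~/Library/Containers/com.docker.docker/Data/vms/0/tty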
Add the following line to the pod spec:
spec:
containers:
- command:
- kube-apiserver
...
- --service-node-port-range=xxxxx-yyyyy # <-- add this line
...
Save and exit. The kube-apiserver pod will be restarted with the new parameters.
Exit the Docker VM (for screen: Ctrl-a,k; for a container: Ctrl-d).
Check the results:
$ kubectl get pod kube-apiserver-docker-desktop -o yaml -n kube-system | less
Take a look: service-pod-range, changing pod range, changing-nodeport-range.

minikube how to connect from one pod to another using hostnames?

I am running a cluster in the default namespace with all the pods in the Running state.
I have an issue: I am trying to telnet from one pod to another pod using the pod hostname 'abcd-7988b76669-lgp8l', but I am not able to connect, although it works if I use the pod's internal IP. Why isn't the DNS name resolved?
I looked at
kubectl get po -n kube-system
NAME READY STATUS RESTARTS AGE
coredns-6955765f44-5lpfd 1/1 Running 0 12h
coredns-6955765f44-9cvnb 1/1 Running 0 12h
Does anybody have an idea how to connect from one pod to another using hostname resolution?
First of all, it is worth mentioning that typically you won't connect to individual Pods using their domain names. One good reason for that is their ephemeral nature. Note that typically you don't create plain Pods but rather a controller such as a Deployment, which manages your Pods and ensures that a specific number of Pods of a certain kind is constantly up and running. Pods may often be deleted and recreated, so you should never rely on their domain names in your applications. Typically you will expose them to other apps, e.g. running in other Pods, via a Service.
Although using an individual Pod's domain name is not recommended, it is still possible. You can do it just for fun or for learning/experimenting purposes.
As @David already mentioned, you would help us much more in providing you a comprehensive answer if you EDIT your question and provide a few important details showing what you've tried already, such as your Pod and Service definitions in YAML format.
Answering literally to your question posted in the title:
minikube how to connect from one pod to another using hostnames?
You won't be able to connect to a Pod using simply its hostname. You can, e.g., ping your backend Pods exposed via a ClusterIP Service by simply pinging the <service-name> (provided it is in the same namespace as the Pod you're pinging from).
Keep in mind however that this doesn't work for Pods - neither Pod names nor their hostnames are resolvable by the cluster DNS.
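For example, from one of the busybox Pods defined below, a Service name resolves while a bare Pod name does not (a sketch):
kubectl exec -ti busybox1 -- nslookup default-subdomain   # Service name: resolves
kubectl exec -ti busybox1 -- nslookup busybox2            # bare Pod name: fails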
You should be able to connect to an individual Pod using its fully qualified domain name (FQDN), provided you have configured everything properly. Just make sure you didn't overlook any of the steps described here:
Make sure you've created a simple headless Service, which may look like this:
apiVersion: v1
kind: Service
metadata:
  name: default-subdomain
spec:
  selector:
    name: busybox
  clusterIP: None
Make sure that your Pod definitions don't lack any important details:
apiVersion: v1
kind: Pod
metadata:
  name: busybox1
  labels:
    name: busybox
spec:
  hostname: busybox-1
  subdomain: default-subdomain
  containers:
  - image: busybox:1.28
    command:
    - sleep
    - "3600"
    name: busybox
---
apiVersion: v1
kind: Pod
metadata:
  name: busybox2
  labels:
    name: busybox
spec:
  hostname: busybox-2
  subdomain: default-subdomain
  containers:
  - image: busybox:1.28
    command:
    - sleep
    - "3600"
    name: busybox
Speaking of important details, pay special attention that you correctly defined hostname and subdomain in the Pod specification, and that the labels used by the Pods match the labels used by the Service's selector.
Once everything is configured properly, you will be able to attach to Pod busybox1 and ping Pod busybox2 using its FQDN, as in the example below:
$ kubectl exec -ti busybox1 -- /bin/sh
/ # ping busybox-2.default-subdomain.default.svc.cluster.local
PING busybox-2.default-subdomain.default.svc.cluster.local (10.16.0.109): 56 data bytes
64 bytes from 10.16.0.109: seq=0 ttl=64 time=0.051 ms
64 bytes from 10.16.0.109: seq=1 ttl=64 time=0.082 ms
64 bytes from 10.16.0.109: seq=2 ttl=64 time=0.081 ms
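For reference, the FQDN used above follows this pattern (assuming the default cluster domain cluster.local):
<pod-hostname>.<subdomain>.<namespace>.svc.cluster.local
# e.g. busybox-2.default-subdomain.default.svc.cluster.local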
I hope this helps.

Exposing an Nginx container and viewing the Service

1.)
Execute the following command to generate a random number which is used in the later steps
NUMBER=$[ ( $RANDOM % 1000 ) + 1 ]
echo $NUMBER
Note: Replace the phrase "your random number" with the number that you generated, wherever that phrase appears.
Your task is to start a Kubernetes Engine managed Kubernetes cluster with the name mycluster-your random number and configure it to run 2 nodes.
2.)
Run and Deploy a Container
Here, you need to launch a single instance of the Nginx container (with version 1.10.0) from the cloud shell.
Execute the following command to view the pod that is running in the nginx container.
3.)
First, you need to expose the Nginx container to the internet.
Kubernetes will create a service with an external load balancer with a public IP address. You can view your service by executing the following command.
kubectl get services
Now you will get the external IP address of the Nginx cluster. Open a new web browser tab and paste the cluster's external IP address. You should get the default Nginx home page.
I have used the code below so far, but the load balancer is not working:
gcloud container clusters create mycluster-5 --zone=us-central1-a
kubectl create deployment mycluster --image=gcr.io/cloud-marketplace/google/nginx1
kubectl set image deployment nginx nginx=nginx:1.9.1
kubectl expose deployment mycluster-727 --type LoadBalancer --port 80 --target-port 8080
service/mycluster-727 exposed
The reason it's not working is that the port is not exposed by the Pod. Please run the command below instead of the second command.
kubectl run mycluster --image=gcr.io/cloud-marketplace/google/nginx1 --port=80
This command should create the deployment and expose containerPort 80 as well, which your service will then be able to hit.
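To confirm the port was recorded on the pod spec, a quick check (not part of the original answer; it assumes kubectl run added the run=mycluster label, as older clients did):
kubectl get pods -l run=mycluster -o jsonpath='{.items[0].spec.containers[0].ports}'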
Welcome to Stack Overflow!
The commands you've posted are not working because you have a typo and the container ports don't match.
Problem explanation:
Here you are creating a new deployment named mycluster, but you are not exposing any port:
kubectl create deployment mycluster --image=gcr.io/cloud-marketplace/google/nginx1
Here you are exposing a deployment named mycluster-727 on port 80 with target port 8080:
kubectl expose deployment mycluster-727 --type LoadBalancer --port 80 --target-port 8080
Here you are setting the image on a different deployment (nginx), and with a different version than the one that was asked for (1.10.0):
kubectl set image deployment nginx nginx=nginx:1.9.1
Fixing the problem
I've checked, and the images gcr.io/cloud-marketplace/google/nginx1 and nginx:1.10.0 both use port 80 to expose the application, so instead of using --target-port=8080 you need to use port 80, but you also need to expose the container port when creating the deployment.
Based on @nischay goayl's answer, the following command will create a deployment and expose it on port 80:
kubectl run mycluster --image=nginx:1.10.0 --port=80
Then, create a service exposing the application:
kubectl expose deployment mycluster --type LoadBalancer --port 80 --target-port 80
Wait for the EXTERNAL-IP and try to reach your application.
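One way to watch for it (a sketch, using the mycluster service created above):
kubectl get service mycluster --watch
# once EXTERNAL-IP changes from <pending> to a real address:
curl http://<EXTERNAL-IP>/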
If you want to test internally, use a test pod with a curl image to reach the service:
apiVersion: v1
kind: Pod
metadata:
  name: curl
  namespace: default
spec:
  containers:
  - name: curl
    image: curlimages/curl
    command:
    - sleep
    - "3600"
    imagePullPolicy: IfNotPresent
  restartPolicy: Always
And then use the command:
kubectl exec -it curl -- curl -IL http://mycluster
response:
HTTP/1.1 200 OK
Server: nginx/1.10.0
Date: Mon, 30 Mar 2020 09:30:07 GMT
Content-Type: text/html
Content-Length: 612
Last-Modified: Tue, 26 Apr 2016 15:17:57 GMT
Connection: keep-alive
ETag: "571f86a5-264"
Accept-Ranges: bytes

container labels in kubernetes

I am building my docker image with jenkins using:
docker build --build-arg VCS_REF=$GIT_COMMIT \
    --build-arg BUILD_DATE=`date -u +"%Y-%m-%dT%H:%M:%SZ"` \
    --build-arg BUILD_NUMBER=$BUILD_NUMBER -t $IMAGE_NAME \
I was using Docker but I am migrating to k8s.
With docker I could access those labels via:
docker inspect --format "{{ index .Config.Labels \"$label\"}}" $container
How can I access those labels with Kubernetes?
I am aware of adding those labels in the .metadata.labels of my YAML files, but I don't like that much because:
- it ties that information to the deployment rather than to the container itself
- it can be modified at any time
...
kubectl describe pods
Thank you
Kubernetes doesn't expose that data. If it did, it would be part of the PodStatus API object (and its embedded ContainerStatus), which is one part of the Pod data that would get dumped out by kubectl get pod deployment-name-12345-abcde -o yaml.
You might consider encoding some of that data in the Docker image tag; for instance, if the CI system is building a tagged commit then use the source control tag name as the image tag, otherwise use a commit hash or sequence number. Another typical path is to use a deployment manager like Helm as the principal source of truth about deployments, and if you do that there can be a path from your CD system to Helm to Kubernetes that can pass along labels or annotations. You can also often set up software to know its own build date and source control commit ID at build time, and then expose that information via an informational-only API (like an HTTP GET /_version call or some such).
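A sketch of the image-tag approach, reusing the Jenkins variables from the build command in the question (the my-app deployment and container names here are hypothetical):
# encode the commit and build number into the tag itself
TAG="${GIT_COMMIT:0:8}-${BUILD_NUMBER}"
docker build -t "$IMAGE_NAME:$TAG" .
docker push "$IMAGE_NAME:$TAG"
# the running version is then readable straight off the pod spec:
kubectl set image deployment/my-app my-app="$IMAGE_NAME:$TAG"
kubectl get deployment my-app -o jsonpath='{.spec.template.spec.containers[0].image}'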
I'll add another option.
I would suggest reading about the Recommended Labels by K8S:
app.kubernetes.io/name - The name of the application
app.kubernetes.io/instance - A unique name identifying the instance of an application
app.kubernetes.io/version - The current version of the application (e.g., a semantic version, revision hash, etc.)
app.kubernetes.io/component - The component within the architecture
app.kubernetes.io/part-of - The name of a higher level application this one is part of
app.kubernetes.io/managed-by - The tool being used to manage the operation of an application
So you can use the labels to describe a pod:
apiVersion: v1
kind: Pod # or set the same labels via a Deployment
metadata:
  labels:
    app.kubernetes.io/name: wordpress
    app.kubernetes.io/instance: wordpress-abcxzy
    app.kubernetes.io/version: "4.9.4"
    app.kubernetes.io/managed-by: helm
    app.kubernetes.io/component: server
    app.kubernetes.io/part-of: wordpress
And use the Downward API (which works in a similar way to reflection in programming languages).
There are two ways to expose Pod and Container fields to a running Container:
1) Environment variables.
2) Volume files.
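A minimal sketch of option 1, assuming you only need a couple of metadata fields inside the container (this fragment goes under the container spec):
env:
- name: MY_POD_NAME
  valueFrom:
    fieldRef:
      fieldPath: metadata.name
- name: MY_POD_VERSION_LABEL
  valueFrom:
    fieldRef:
      fieldPath: metadata.labels['version']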
And below is an example of option 2, using volume files:
apiVersion: v1
kind: Pod
metadata:
  name: kubernetes-downwardapi-volume-example
  labels:
    version: 4.5.6
    component: database
    part-of: etl-engine
  annotations:
    build: two
    builder: john-doe
spec:
  containers:
  - name: client-container
    image: k8s.gcr.io/busybox
    command: ["sh", "-c"]
    args: # <-- we're using the mounted volume files inside the container
    - while true; do
        if [[ -e /etc/podinfo/labels ]]; then
          echo -en '\n\n'; cat /etc/podinfo/labels; fi;
        if [[ -e /etc/podinfo/annotations ]]; then
          echo -en '\n\n'; cat /etc/podinfo/annotations; fi;
        sleep 5;
      done;
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes: # <-- here we mount the pod's labels and annotations
  - name: podinfo
    downwardAPI:
      items:
      - path: "labels"
        fieldRef:
          fieldPath: metadata.labels
      - path: "annotations"
        fieldRef:
          fieldPath: metadata.annotations
Notice that in the example we accessed the labels and annotations that were passed and mounted to the /etc/podinfo path.
Besides labels and annotations, the Downward API exposes multiple additional fields, such as:
The pod's IP address.
The pod's service account name.
The node's name and IP.
A container's CPU limit, CPU request, memory limit, and memory request.
See the full list here.
(*) A nice blog post discussing the Downward API.
(**) You can view all your pods' labels with
$ kubectl get pods --show-labels
NAME ... LABELS
my-app-xxx-aaa pod-template-hash=...,run=my-app
my-app-xxx-bbb pod-template-hash=...,run=my-app
my-app-xxx-ccc pod-template-hash=...,run=my-app
fluentd-8ft5r app=fluentd,controller-revision-hash=...,pod-template-generation=2
fluentd-fl459 app=fluentd,controller-revision-hash=...,pod-template-generation=2
kibana-xyz-adty4f app=kibana,pod-template-hash=...
recurrent-tasks-executor-xaybyzr-13456 pod-template-hash=...,run=recurrent-tasks-executor
serviceproxy-1356yh6-2mkrw app=serviceproxy,pod-template-hash=...
Or view only a specific label with $ kubectl get pods -L <label_name>.
