Kubernetes container ports setup similar to docker-compose?

I'm having trouble setting up my k8s pods exactly how I want. My trouble is that I have multiple containers which listen on the same ports (80, 443). On a remote machine I would normally handle this with docker-compose and a mapping such as 'ports: - 12345:80'. With k8s, all of the examples I have found suggest that the only option for a container is to expose a port, not to proxy it. I know I can use reverse proxies to forward to multiple ports, but that would require the images to use different ports rather than all using the same port and having the container forward the requests. Is there a way to do this in k8s?
apiVersion: v1
kind: Service
metadata:
  name: backend
spec:
  loadBalancerIP: xxx.xxx.xxx.xxx
  selector:
    app: app
    tier: backend
  ports:
    - protocol: "TCP"
      port: 80
      targetPort: 80
  type: LoadBalancer
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: app-deployment
spec:
  selector:
    matchLabels:
      app: app
      tier: backend
      track: stable
  replicas: 1
  template:
    metadata:
      labels:
        app: app
        tier: backend
        track: stable
    spec:
      containers:
        - name: app
          image: image:example
          ports:
            - containerPort: 80
      imagePullSecrets:
        - name: xxxxxxx
Ideally, I would be able to have the containers on a node listening on different ports, while the applications running in those containers continue to listen on 80/443, and my Services would route to the correct container as necessary.
My load balancer is working correctly, as is my first container. Adding a second container succeeds, but the second container can't be reached. The second container uses a similar script with different names and a different image for deployment.

The answer here is to add a Service for the pod, in which the ports are declared. Using Kompose to convert a docker-compose file, this is the result:
apiVersion: v1
kind: Service
metadata:
  annotations:
    kompose.cmd: pathToKompose.exe convert
    kompose.version: 1.21.0 (992df58d8)
  creationTimestamp: null
  labels:
    io.kompose.service: app
  name: app
spec:
  ports:
    - name: "5000"
      port: 5000
      targetPort: 80
  selector:
    io.kompose.service: app
status:
  loadBalancer: {}
as well as
apiVersion: apps/v1
kind: Deployment
metadata:
  annotations:
    kompose.cmd: pathToKompose.exe convert
    kompose.version: 1.21.0 (992df58d8)
  creationTimestamp: null
  labels:
    io.kompose.service: app
  name: app
spec:
  replicas: 1
  selector:
    matchLabels:
      io.kompose.service: app
  strategy: {}
  template:
    metadata:
      annotations:
        kompose.cmd: pathToKompose.exe convert
        kompose.version: 1.21.0 (992df58d8)
      creationTimestamp: null
      labels:
        io.kompose.service: app
    spec:
      containers:
        - image: image:example
          imagePullPolicy: ""
          name: app
          ports:
            - containerPort: 80
          resources: {}
      restartPolicy: Always
      serviceAccountName: ""
      volumes: null
status: {}
Some of the fluff from Kompose could be removed, but the relevant answer to this question is declaring the port and targetPort for the pod in a Service, and exposing that targetPort as the containerPort in the Deployment for the container.
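For illustration, a trimmed-down version of that Service might look like this (a sketch reusing the names from the Kompose output above; port 5000 is what clients connect to, targetPort 80 is what the container listens on):

apiVersion: v1
kind: Service
metadata:
  name: app
spec:
  ports:
    - port: 5000
      targetPort: 80
  selector:
    io.kompose.service: app

This mirrors the docker-compose style '5000:80' mapping: each container keeps listening on 80 internally, and each Service picks the externally visible port.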
Thanks to David Maze and GintsGints for the help!

Related

Minikube Ingress Controller (with custom domain) Not Loading

I'm trying to run a VueJS app on my local machine with minikube and Kubernetes.
Now:
- I applied the YAML file.
- I added the IP address of the ingress to /etc/hosts on my macOS (M1) machine.
neu.com does not load, nor does the IP address of the ingress controller.
What I've tried:
- Running the service with a NodePort (it loads up fine).
- Removing everything and doing the whole thing from the start.
(The minikube ingress addon is switched on.)
Here is the version info for all the tools I'm using.
kubectl version --short
Client Version: v1.24.3
Kustomize Version: v4.5.4
Server Version: v1.24.1
-------------------------
minikube version
minikube version: v1.26.0
commit: f4b412861bb746be73053c9f6d2895f12cf78565
And this is the YAML file.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: cleo
  labels:
    app: cleo
spec:
  replicas: 1
  selector:
    matchLabels:
      app: cleo
  template:
    metadata:
      labels:
        app: cleo
    spec:
      containers:
        - name: cleo
          image: image-name-of-my-vuejs-app
          ports:
            - containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: cleo-service
spec:
  selector:
    app: cleo
  ports:
    - protocol: TCP
      port: 80
      targetPort: 80
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: cleo-ingress
  annotations:
    nginx.ingress.kubernetes.io/rewrite-target: /$1
spec:
  rules:
    - host: neu.com
      http:
        paths:
          - path: /
            pathType: Exact
            backend:
              service:
                name: cleo-service
                port:
                  number: 80
The service and ingress are running.
Can somebody see what the problem is?
Direct access to the ingress IP only works on Linux; with the docker driver, the docker network is not reachable from the host on macOS or Windows: https://docs.docker.com/desktop/mac/networking/#known-limitations-use-cases-and-workarounds
Reference: https://github.com/kubernetes/minikube/issues/13951
On macOS, a commonly suggested workaround is to run minikube tunnel and point the hostname at 127.0.0.1 in /etc/hosts instead of at the ingress IP.

Problem deploying golang app to kubernetes

So I followed this tutorial that explains how to build containerized microservices in Golang, Dockerize them, and deploy to Kubernetes.
https://www.youtube.com/watch?v=H6pF2Swqrko
I got to the point where I can access my app via the minikube IP (mine is 192.168.59.100).
I set up Kubernetes and currently have 3 working pods, but I cannot open my golang app through Kubernetes with the URL that kubectl shows me: "192.168.59.100:31705..."
I have a lead: when I browse to "https://192.168.59.100:8443/", a 403 error comes up.
Here is my deployment.yml:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web-deployment
  labels:
    app: web
spec:
  replicas: 3
  selector:
    matchLabels:
      app: web
  template:
    metadata:
      labels:
        app: web
    spec:
      containers:
        - name: go-web-app
          image: go-app-ms:latest
          imagePullPolicy: IfNotPresent
          ports:
            - containerPort: 80
Here is my service.yml:
apiVersion: v1
kind: Service
metadata:
  name: web-service
spec:
  type: NodePort
  selector:
    app.kubernetes.io/name: web
  ports:
    - port: 80
      targetPort: 80
Your Service's selector tries to match pods with the label app.kubernetes.io/name: web, but the pods have the label app: web. They do not match. The selector on a Service must match the labels on the pods; since you use a Deployment object, this means the same labels as in spec.template.metadata.labels.
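For example, a minimal fix is to change the selector in service.yml to use the same label as the pod template (a sketch based on the manifests above):

apiVersion: v1
kind: Service
metadata:
  name: web-service
spec:
  type: NodePort
  selector:
    app: web
  ports:
    - port: 80
      targetPort: 80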
@Szczad has correctly described the problem. I wanted to suggest a way of avoiding that problem in the future. Kustomize is a tool for building Kubernetes manifests, and it is built into the kubectl command. One of its features is the ability to apply a set of common labels to your resources, including correctly filling in the selectors in Services and Deployments.
If we simplify your Deployment to this (in deployment.yaml):
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web-deployment
spec:
  replicas: 3
  template:
    spec:
      containers:
        - name: go-web-app
          image: go-app-ms:latest
          imagePullPolicy: IfNotPresent
          ports:
            - containerPort: 80
And your Service to this (in service.yaml):
apiVersion: v1
kind: Service
metadata:
  name: web-service
spec:
  type: NodePort
  ports:
    - port: 80
      targetPort: 80
And we place the following kustomization.yaml in the same directory:
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
commonLabels:
  app: web
resources:
  - deployment.yaml
  - service.yaml
Then we can deploy this application by running:
kubectl apply -k .
And this will result in the following manifests:
apiVersion: v1
kind: Service
metadata:
  labels:
    app: web
  name: web-service
spec:
  ports:
    - port: 80
      targetPort: 80
  selector:
    app: web
  type: NodePort
---
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: web
  name: web-deployment
spec:
  replicas: 3
  selector:
    matchLabels:
      app: web
  template:
    metadata:
      labels:
        app: web
    spec:
      containers:
        - image: go-app-ms:latest
          imagePullPolicy: IfNotPresent
          name: go-web-app
          ports:
            - containerPort: 80
As you can see here, the app: web label has been applied to the deployment, to the deployment selector, to the pod template, and to the service selector.
Applying the labels through Kustomize like this means that you only need to change the label in one place. It makes it easier to avoid problems caused by label mismatches.

Connect to MongoDB with mongoose, both running in Kubernetes

I have a microservice which I developed and tested using docker-compose. Now I would like to deploy it to kubernetes.
Part of my docker-compose file looks like this:
tasksdb:
  container_name: tasks-db
  image: mongo:4.4.1
  restart: always
  ports:
    - '6004:27017'
  volumes:
    - ./tasks_service/tasks_db:/data/db
  networks:
    - backend
tasks-service:
  container_name: tasks-service
  build: ./tasks_service
  restart: always
  ports:
    - "5004:3000"
  volumes:
    - ./tasks_service/logs:/usr/src/app/logs
    - ./tasks_service/tasks_attachments/:/usr/src/app/tasks_attachments
  depends_on:
    - tasksdb
  networks:
    - backend
I used mongoose to connect to the database and it worked fine:
const connection = "mongodb://tasks-db:27017/tasks";
const connectDb = () => {
  mongoose.connect(connection, {useNewUrlParser: true, useCreateIndex: true, useFindAndModify: false});
  return mongoose.connect(connection);
};
Utilizing Kompose, I created a deployment file; however, I had to modify the persistent volume and persistent volume claim accordingly.
I have something like this:
apiVersion: v1
kind: PersistentVolume
metadata:
  name: tasks-volume
  labels:
    type: local
spec:
  storageClassName: manual
  volumeMode: Filesystem
  capacity:
    storage: 10Gi
  accessModes:
    - ReadWriteMany
  nfs:
    server: 192.168.60.50
    path: /tasks_db
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: tasksdb-claim0
spec:
  storageClassName: manual
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 1Gi
I changed the mongo URL like this:
const connection = "mongodb://tasksdb.default.svc.cluster.local:27017/tasks";
My deployment looks like this:
apiVersion: v1
items:
  - apiVersion: v1
    kind: Service
    metadata:
      annotations:
        kompose.cmd: kompose convert -f docker-compose.yml -o k8manifest.yml
        kompose.version: 1.22.0 (955b78124)
      creationTimestamp: null
      labels:
        io.kompose.service: tasks-service
      name: tasks-service
    spec:
      ports:
        - name: "5004"
          port: 5004
          targetPort: 3000
      selector:
        io.kompose.service: tasks-service
    status:
      loadBalancer: {}
  - apiVersion: v1
    kind: Service
    metadata:
      annotations:
        kompose.cmd: kompose convert -f docker-compose.yml -o k8manifest.yml
        kompose.version: 1.22.0 (955b78124)
      creationTimestamp: null
      labels:
        io.kompose.service: tasksdb
      name: tasksdb
    spec:
      ports:
        - name: "6004"
          port: 6004
          targetPort: 27017
      selector:
        io.kompose.service: tasksdb
    status:
      loadBalancer: {}
  - apiVersion: apps/v1
    kind: Deployment
    metadata:
      annotations:
        kompose.cmd: kompose convert -f docker-compose.yml -o k8manifest.yml
        kompose.version: 1.22.0 (955b78124)
      creationTimestamp: null
      labels:
        io.kompose.service: tasks-service
      name: tasks-service
    spec:
      replicas: 1
      selector:
        matchLabels:
          io.kompose.service: tasks-service
      strategy:
        type: Recreate
      template:
        metadata:
          annotations:
            kompose.cmd: kompose convert -f docker-compose.yml -o k8manifest.yml
            kompose.version: 1.22.0 (955b78124)
          creationTimestamp: null
          labels:
            io.kompose.service: tasks-service
        spec:
          containers:
            - image: 192.168.60.50:5000/blascal_tasks-service
              name: tasks-service
              imagePullPolicy: IfNotPresent
              ports:
                - containerPort: 3000
          restartPolicy: Always
    status: {}
  - apiVersion: apps/v1
    kind: Deployment
    metadata:
      annotations:
        kompose.cmd: kompose convert -f docker-compose.yml -o k8manifest.yml
        kompose.version: 1.22.0 (955b78124)
      creationTimestamp: null
      labels:
        io.kompose.service: tasksdb
      name: tasksdb
    spec:
      replicas: 1
      selector:
        matchLabels:
          io.kompose.service: tasksdb
      strategy:
        type: Recreate
      template:
        metadata:
          annotations:
            kompose.cmd: kompose convert -f docker-compose.yml -o k8manifest.yml
            kompose.version: 1.22.0 (955b78124)
          creationTimestamp: null
          labels:
            io.kompose.service: tasksdb
        spec:
          containers:
            - image: mongo:4.4.1
              name: tasks-db
              ports:
                - containerPort: 27017
              resources: {}
              volumeMounts:
                - mountPath: /data/db
                  name: tasksdb-claim0
          restartPolicy: Always
          volumes:
            - name: tasksdb-claim0
              persistentVolumeClaim:
                claimName: tasksdb-claim0
    status: {}
Having several services, I added an Ingress resource for my routing:
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: test-ingress
  annotations:
    kubernetes.io/ingress.class: nginx
    nginx.ingress.kubernetes.io/use-regex: "true"
    nginx.ingress.kubernetes.io/rewrite-target: /
spec:
  rules:
    - http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              serviceName: tasks-service
              servicePort: 5004
The deployment seems to run fine. However, I have three issues:
1. Although I can hit my default path, which just reads "tasks service is up", I cannot access my mongoose routes like /api/task/raise, which connect to the db; the request fails with "..buffering timed out". I guess the path does not link up to the database service?
2. Whenever there is a power surge and my machine goes off, bringing up the db deployment fails until I delete the config files from the persistent volume. How do I prevent this corruption of files?
3. I have been researching an elaborate way of changing the master IP of my cluster, as I intend to transfer my cluster to a different network. Any guidance, please?
I also checked the DNS logs with:
kubectl logs --namespace=kube-system -l k8s-app=kube-dns
Your tasksdb Service exposes port 6004, not 27017. Try using the following URL:
const connection = "mongodb://tasksdb.default.svc.cluster.local:6004/tasks";
Changing your network depends on what networking CNI plugin you are using. Every plugin has different steps. For Calico, please see https://docs.projectcalico.org/networking/migrate-pools
I believe this is your ClusterIP Service for the mongodb instance:
apiVersion: v1
kind: Service
metadata:
  annotations:
    kompose.cmd: kompose convert -f docker-compose.yml -o k8manifest.yml
    kompose.version: 1.22.0 (955b78124)
  creationTimestamp: null
  labels:
    io.kompose.service: tasksdb
  name: tasksdb
spec:
  ports:
    - name: "6004"
      port: 6004
      targetPort: 27017
  selector:
    io.kompose.service: tasksdb
status:
  loadBalancer: {}
When you create an instance of mongodb inside Kubernetes, it runs inside a pod. To connect to that pod, you go through its ClusterIP Service, and you use the name of that Service as the host in the connection URL. In this case your connection URL must be:
mongodb://tasksdb:6004/nameOfDatabase
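Alternatively, if you would rather keep the standard MongoDB port in the connection string, the Service itself could expose 27017; a sketch based on the Kompose-generated Service above (only the ports section changes):

apiVersion: v1
kind: Service
metadata:
  name: tasksdb
spec:
  ports:
    - name: "27017"
      port: 27017
      targetPort: 27017
  selector:
    io.kompose.service: tasksdb

With that change, a URL such as mongodb://tasksdb:27017/tasks would work without remembering the non-standard port.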

EKS help needed

Need some basic help with EKS. Not sure what I am doing wrong.
I have a java springboot application as a docker container in ECR.
I created a simple deployment script
deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: microservice-deployment
  labels:
    app: java-microservice
spec:
  replicas: 2
  selector:
    matchLabels:
      app: java-microservice
  template:
    metadata:
      labels:
        app: java-microservice
    spec:
      containers:
        - name: java-microservice-container
          image: xxxxxxxxx.dkr.ecr.us-west-2.amazonaws.com/yyyyyyy
          ports:
            - containerPort: 80
I created a loadbalancer to expose this outside
loadbalancer.yaml
apiVersion: v1
kind: Service
metadata:
  name: java-microservice-service
spec:
  type: LoadBalancer
  selector:
    app: java-microservice
  ports:
    - protocol: TCP
      port: 80
      targetPort: 80
The pods got created, and I can see they are running.
When I do kubectl get service java-microservice-service, I see the load balancer is running.
When I go to the browser and try to access the application via http://loadbalancer-address, I cannot reach it.
What am I missing? How do I go about debugging this?
Thanks in advance.
OK, so I changed the port in my YAML files to 8080 and it seems to be working fine.
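For reference, a sketch of what that fix likely looks like, assuming the Spring Boot app listens on its default port 8080: set containerPort: 8080 in the Deployment and point the Service's targetPort at it, for example:

apiVersion: v1
kind: Service
metadata:
  name: java-microservice-service
spec:
  type: LoadBalancer
  selector:
    app: java-microservice
  ports:
    - protocol: TCP
      port: 80
      targetPort: 8080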

Kubernetes (Minikube): environment variable

I'm running a simple Spring microservice project with Minikube. I have two projects: lucky-word-client (on port 8080) and lucky-word-server (on port 8888). lucky-word-client has to communicate with lucky-word-server. I want to inject the static NodePort of lucky-word-server (http://192.*..100:32002) as an environment variable in my Kubernetes deployment script of lucky-word-client. How could I do this?
This is the deployment of lucky-word-server:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: lucky-server
spec:
  selector:
    matchLabels:
      app: lucky-server
  replicas: 1
  template:
    metadata:
      labels:
        app: lucky-server
    spec:
      containers:
        - name: lucky-server
          image: lucky-server-img
          imagePullPolicy: Never
          ports:
            - containerPort: 8888
This is the service of lucky-word-server:
kind: Service
apiVersion: v1
metadata:
  name: lucky-server
spec:
  selector:
    app: lucky-server
  ports:
    - protocol: TCP
      targetPort: 8888
      port: 80
      nodePort: 32002
  type: NodePort
This is the deployment of lucky-word-client:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: lucky-client
spec:
  selector:
    matchLabels:
      app: lucky-client
  replicas: 1
  template:
    metadata:
      labels:
        app: lucky-client
    spec:
      containers:
        - name: lucky-client
          image: lucky-client-img
          imagePullPolicy: Never
          ports:
            - containerPort: 8080
This is the service of lucky-word-client:
kind: Service
apiVersion: v1
metadata:
  name: lucky-client
spec:
  selector:
    app: lucky-client
  ports:
    - protocol: TCP
      targetPort: 8080
      port: 80
  type: NodePort
Kubernetes automatically injects services as environment variables: https://kubernetes.io/docs/concepts/services-networking/service/#environment-variables
But you should not rely on this. It won't work unless all the services are already in place when you create the pod. It was inspired by Docker, which has also moved on to DNS-based service discovery, so environment-based service discovery is a thing of the past.
Please rely on DNS service discovery instead. Minikube ships with kube-dns, so you can just use the lucky-server hostname (or one of the lucky-server[.default[.svc[.cluster[.local]]]] names). Read the documentation: https://kubernetes.io/docs/concepts/services-networking/dns-pod-service/
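If you still want the server's address available as an environment variable in the client, you can set it yourself in the client Deployment and point it at the Service's DNS name instead of a NodePort. A sketch (the variable name LUCKY_WORD_SERVER_URL is just an example; the lucky-server Service above exposes port 80):

apiVersion: apps/v1
kind: Deployment
metadata:
  name: lucky-client
spec:
  selector:
    matchLabels:
      app: lucky-client
  replicas: 1
  template:
    metadata:
      labels:
        app: lucky-client
    spec:
      containers:
        - name: lucky-client
          image: lucky-client-img
          imagePullPolicy: Never
          ports:
            - containerPort: 8080
          env:
            - name: LUCKY_WORD_SERVER_URL
              value: "http://lucky-server:80"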
