How can I communicate with a DB External to my Kubernetes Cluster - docker

Good afternoon, I have a question. I am new to Kubernetes and I need to connect to a DB that is outside of my cluster. I could only connect to the DB using hostNetwork = true, but this is not recommended. Is there a method to communicate with an external DB?
Here is the YAML I am currently using; my pod contains one container that runs a Spring Boot application:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: find-complementary-account-info
  labels:
    app: find-complementary-account-info
spec:
  replicas: 2
  selector:
    matchLabels:
      app: find-complementary-account-info
  template:
    metadata:
      labels:
        app: find-complementary-account-info
    spec:
      hostNetwork: true
      dnsPolicy: Default
      containers:
      - name: find-complementary-account-info
        image: find-complementary-account-info:latest
        imagePullPolicy: IfNotPresent
        resources:
          limits:
            memory: "350Mi"
          requests:
            memory: "300Mi"
        ports:
        - containerPort: 8081
        env:
        - name: URL_CONNECTION_BD
          value: jdbc:oracle:thin:#11.160.9.18:1558/DEFAULTSRV.WORLD
        - name: USERNAME_CONNECTION_BD
          valueFrom:
            secretKeyRef:
              name: credentials-bd-pers
              key: user_pers
        - name: PASSWORD_CONNECTION_BD
          valueFrom:
            secretKeyRef:
              name: credentials-bd-pers
              key: password_pers
---
apiVersion: v1
kind: Service
metadata:
  name: find-complementary-account-info
spec:
  type: NodePort
  selector:
    app: find-complementary-account-info
  ports:
  - protocol: TCP
    port: 8081
    targetPort: 8081
    nodePort: 30020
Does anyone have an idea how to communicate with an external DB? This is not a cloud cluster; it is on-premises.

The hostNetwork parameter is used for accessing pods from outside of the cluster; you don't need it here.
Pods inside the cluster can reach external hosts because their traffic is NATted. If they can't, something external is preventing it, such as a firewall or a missing route.
The quickest way to check is to SSH into one of your Kubernetes cluster nodes and try
telnet 11.160.9.18 1558
Anyway, that IP address looks like a public one, so you have to check your company firewall, imho.
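If you also want a stable in-cluster name for the external database instead of hard-coding its IP in the Deployment, one common pattern is a selector-less Service plus a matching Endpoints object. This is only a sketch, assuming a name of external-oracle and the IP/port from your question; adjust to your environment:
apiVersion: v1
kind: Service
metadata:
  name: external-oracle
spec:
  ports:
  - protocol: TCP
    port: 1558
    targetPort: 1558
---
apiVersion: v1
kind: Endpoints
metadata:
  name: external-oracle   # must match the Service name
subsets:
- addresses:
  - ip: 11.160.9.18
  ports:
  - port: 1558
With that in place, the JDBC URL can point at external-oracle:1558 instead of the raw IP, and hostNetwork can stay off.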

Related

GKE Kubernetes Ingress not routing traffic to microservices

I'm pretty new to Kubernetes and trying to deploy what I think is a pretty common use case onto a GKE cluster we have created with Terraform: microservices all hosted on one cluster. But I cannot for the life of me get the routing to serve traffic to the correct services. The setup I'm trying to create is as follows:
Deployment & Service for each microservice (canvas-service and video-service)
Single Ingress (class GCE) on the cluster, hosted at a static IP, that routes traffic to each service based on path
The current config looks like this:
canvas-service.yaml
# Deployment (Service Manager)
apiVersion: apps/v1
kind: Deployment
metadata:
  name: canvas-service-deployment
  labels:
    name: canvas-service
spec:
  replicas: 2
  selector:
    matchLabels:
      app: canvas-service
  template:
    metadata:
      labels:
        app: canvas-service
    spec:
      restartPolicy: Always
      containers:
      - name: canvas-service
        image: gcr.io/emile-learning-dev/canvas-service-image:latest
        readinessProbe:
          httpGet:
            path: /health
            port: 8080
        ports:
        - name: root
          containerPort: 8080
        resources:
          requests:
            memory: "4096Mi"
            cpu: "1000m"
          limits:
            memory: "8192Mi"
            cpu: "2000m"
---
# Service
apiVersion: v1
kind: Service
metadata:
  name: canvas-service
spec:
  type: NodePort
  ports:
  - port: 80
    protocol: TCP
    targetPort: 8080
  selector:
    app: canvas-service
video-service.yaml
# Deployment (Service Manager)
apiVersion: apps/v1
kind: Deployment
metadata:
  name: video-service-deployment
  labels:
    name: video-service
spec:
  replicas: 2
  selector:
    matchLabels:
      app: video-service
  template:
    metadata:
      labels:
        app: video-service
    spec:
      restartPolicy: Always
      containers:
      - name: video-service
        image: gcr.io/emile-learning-dev/video-service-image:latest
        readinessProbe:
          httpGet:
            path: /health
            port: 8080
        ports:
        - name: root
          containerPort: 8080
        resources:
          requests:
            memory: "4096Mi"
            cpu: "1000m"
          limits:
            memory: "8192Mi"
            cpu: "2000m"
---
# Service
apiVersion: v1
kind: Service
metadata:
  name: video-service
spec:
  type: NodePort
  ports:
  - port: 80
    protocol: TCP
    targetPort: 8080
  selector:
    app: video-service
services-ingress.yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: services-ingress
  annotations:
    kubernetes.io/ingress.global-static-ip-name: "services-ip"
spec:
  defaultBackend:
    service:
      name: canvas-service
      port:
        number: 80
  rules:
  - http:
      paths:
      - path: /canvas/*
        pathType: ImplementationSpecific
        backend:
          service:
            name: canvas-service
            port:
              number: 80
      - path: /video/*
        pathType: ImplementationSpecific
        backend:
          service:
            name: video-service
            port:
              number: 80
The output of kubectl describe ingress services-ingress looks like this:
Name:             services-ingress
Namespace:        default
Address:          34.107.136.153
Default backend:  canvas-service:80 (10.244.2.17:8080,10.244.5.15:8080)
Rules:
  Host  Path       Backends
  ----  ----       --------
  *
        /canvas/*  canvas-service:80 (10.244.2.17:8080,10.244.5.15:8080)
        /video/*   video-service:80 (10.244.1.15:8080,10.244.8.51:8080)
Annotations:  ingress.kubernetes.io/backends: {"k8s-be-30551--8ba41a687ec15071":"HEALTHY","k8s-be-32145--8ba41a687ec15071":"HEALTHY"}
              ingress.kubernetes.io/forwarding-rule: k8s2-fr-xlhz0sas-default-services-ingress-hqyvwyy1
              ingress.kubernetes.io/target-proxy: k8s2-tp-xlhz0sas-default-services-ingress-hqyvwyy1
              ingress.kubernetes.io/url-map: k8s2-um-xlhz0sas-default-services-ingress-hqyvwyy1
              kubernetes.io/ingress.global-static-ip-name: services-ip
Events:
  Type    Reason     Age                  From                     Message
  ----    ------     ---                  ----                     -------
  Normal  Sync       9m36s                loadbalancer-controller  UrlMap "k8s2-um-xlhz0sas-default-services-ingress-hqyvwyy1" created
  Normal  Sync       9m34s                loadbalancer-controller  TargetProxy "k8s2-tp-xlhz0sas-default-services-ingress-hqyvwyy1" created
  Normal  Sync       9m26s                loadbalancer-controller  ForwardingRule "k8s2-fr-xlhz0sas-default-services-ingress-hqyvwyy1" created
  Normal  IPChanged  9m26s                loadbalancer-controller  IP is now 34.107.136.153
  Normal  Sync       6m56s (x5 over 11m)  loadbalancer-controller  Scheduled for sync
For testing, I have a healthcheck route at /health for each service. What I'm running into is that when I hit {public_ip}/health (using the default backend) I get the expected response. But when I hit {public_ip}/canvas/health or {public_ip}/video/health, I get a 404 Not Found.
I know it has something to do with the fact that the entire service route structure is on the /canvas or /video route, but thought that the /* was supposed to address exactly that. I'd like to basically make the root route for each service exist on the corresponding subpaths /canvas and /video. Would love to hear any thoughts you guys have as to what I'm doing wrong that's leading to traffic not being routed correctly.
If it's an issue with the GCP default Ingress resource, or this isn't within its functionality, I'm totally open to using an nginx Ingress. But I haven't been able to get an nginx Ingress to expose an IP at all, so I figured the GCP Ingress would probably be a shorter path to getting this cluster working. If I'm wrong about this too, please let me know.
It's due to the paths defined in the Ingress. Changing the paths to /video and /canvas will make this work.
To understand the reason behind this, read the nginx ingress controller documentation about path and pathType. You can also use a regex pattern in the path, as described here: https://kubernetes.github.io/ingress-nginx/user-guide/ingress-path-matching/
Tip: When in doubt, you can always exec into the nginx ingress controller pods and check the nginx.conf file to see the location blocks generated by nginx.
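For reference, a sketch of what the Ingress from the question might look like with the adjusted paths (pathType: Prefix is an assumption on my part; plain /canvas and /video with ImplementationSpecific should behave similarly):
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: services-ingress
  annotations:
    kubernetes.io/ingress.global-static-ip-name: "services-ip"
spec:
  defaultBackend:
    service:
      name: canvas-service
      port:
        number: 80
  rules:
  - http:
      paths:
      - path: /canvas
        pathType: Prefix
        backend:
          service:
            name: canvas-service
            port:
              number: 80
      - path: /video
        pathType: Prefix
        backend:
          service:
            name: video-service
            port:
              number: 80
With Prefix matching, /canvas/health is forwarded to canvas-service and /video/health to video-service.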

Can't access my local kubernetes service over the internet

Implementation Goal
Expose Zookeeper instance, running on kubernetes, to the internet.
(configuration & version information provided at the bottom)
Implementation Attempt
I currently have a minikube cluster running on ubuntu 14.04, backed by docker containers.
I'm running a bare-metal k8s cluster, and I'm trying to expose a zookeeper service to the internet. Seeing as my cluster is not running on a cloud provider, I set up MetalLB in order to provide a network load-balancer implementation for my zookeeper service.
On startup everything looks good, an external IP is assigned and I can access it from the same host via a curl command.
$ kubectl get pods -n metallb-system
NAME                          READY   STATUS    RESTARTS   AGE
controller-5c9894b5cd-9gh8m   1/1     Running   0          5h59m
speaker-j2z8q                 1/1     Running   0          5h59m

$ kubectl get svc
NAME         TYPE           CLUSTER-IP       EXTERNAL-IP   PORT(S)                         AGE
kubernetes   ClusterIP      10.xxx.xxx.xxx   <none>        443/TCP                         6d19h
zk-cs        LoadBalancer   10.xxx.xxx.xxx   172.1.1.x     2181:30035/TCP                  56m
zk-hs        LoadBalancer   10.xxx.xxx.xxx   172.1.1.x     2888:30664/TCP,3888:31113/TCP   6m15s
When I curl the above-mentioned external IPs, I get a valid response:
$ curl -D- "http://172.1.1.x:2181"
curl: (52) Empty reply from server
So far it all looks good. I can access the LB from outside the cluster with no issues, but this is where my lack of Kubernetes/networking knowledge gets me: I'm finding it impossible to expose this LB to the internet. I've tried running minikube tunnel, which I had high hopes for, only to be deeply disappointed.
Running a curl command from another node while minikube tunnel is running just sees the request time out.
$ curl -D- "http://172.1.1.x:2181"
curl: (28) Failed to connect to 172.1.1.x port 2181: Timed out
At this point, as I mentioned before, I'm stuck.
Is there any way that I can get this service exposed to the internet without giving my soul to AWS or GCP?
Any help will be greatly appreciated.
Service Configuration
apiVersion: v1
kind: Service
metadata:
  name: zk-hs
  labels:
    app: zk
spec:
  selector:
    app: zk
  ports:
  - port: 2888
    targetPort: 2888
    name: server
    protocol: TCP
  - port: 3888
    targetPort: 3888
    name: leader-election
    protocol: TCP
  clusterIP: ""
  type: LoadBalancer
---
apiVersion: v1
kind: Service
metadata:
  name: zk-cs
  labels:
    app: zk
spec:
  selector:
    app: zk
  ports:
  - name: client
    protocol: TCP
    port: 2181
    targetPort: 2181
  type: LoadBalancer
---
apiVersion: policy/v1beta1
kind: PodDisruptionBudget
metadata:
  name: zk-pdb
spec:
  selector:
    matchLabels:
      app: zk
  maxUnavailable: 1
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: zk
spec:
  selector:
    matchLabels:
      app: zk
  serviceName: zk-hs
  replicas: 1
  updateStrategy:
    type: RollingUpdate
  podManagementPolicy: OrderedReady
  template:
    metadata:
      labels:
        app: zk
    spec:
      affinity:
        podAntiAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
          - labelSelector:
              matchExpressions:
              - key: "app"
                operator: In
                values:
                - zk
            topologyKey: "kubernetes.io/hostname"
      containers:
      - name: zookeeper
        imagePullPolicy: Always
        image: "library/zookeeper:3.6"
        resources:
          requests:
            memory: "1Gi"
            cpu: "0.5"
        ports:
        - containerPort: 2181
          name: client
        - containerPort: 2888
          name: server
        - containerPort: 3888
          name: leader-election
        volumeMounts:
        - name: datadir
          mountPath: /var/lib/zookeeper
        - name: zoo-config
          mountPath: /conf
      volumes:
      - name: zoo-config
        configMap:
          name: zoo-config
      securityContext:
        fsGroup: 2000
        runAsUser: 1000
        runAsNonRoot: true
  volumeClaimTemplates:
  - metadata:
      name: datadir
    spec:
      accessModes: [ "ReadWriteOnce" ]
      resources:
        requests:
          storage: 10Gi
---
apiVersion: v1
kind: ConfigMap
metadata:
  name: zoo-config
  namespace: default
data:
  zoo.cfg: |
    tickTime=10000
    dataDir=/var/lib/zookeeper
    clientPort=2181
    initLimit=10
    syncLimit=4
---
apiVersion: v1
kind: ConfigMap
metadata:
  namespace: metallb-system
  name: config
data:
  config: |
    address-pools:
    - name: default
      protocol: layer2
      addresses:
      - 172.1.1.1-172.1.1.10
minikube: v1.13.1
docker: 18.06.3-ce
You can do it with minikube, but the idea of minikube is just to test stuff in your local environment, so by default it does not have the correct iptables permissions. Yes, you can adjust that, but if your goal is simply to run without any cloud provider, I highly recommend you use kubeadm (https://kubernetes.io/docs/setup/production-environment/tools/kubeadm/create-cluster-kubeadm/).
This tool will give you a very customizable cluster configuration and you will be able to sort out your network problems without headaches.
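As a rough sketch of the kubeadm route (commands taken from the linked guide; the pod CIDR is an assumption that depends on which CNI plugin you pick):
# on the machine that will become the control plane
sudo kubeadm init --pod-network-cidr=192.168.0.0/16

# set up kubectl for your user
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config

# then install a CNI plugin and join worker nodes with the
# `kubeadm join ...` command printed by kubeadm init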

Nginx ingress controller logs keeps telling me that i have wrong pod information

I am running a two-node Kubernetes cluster. I am able to deploy my microservice with 3 replicas, and its service. Now I am trying to use an nginx ingress controller to expose my service, but I am getting this error in the logs:
unexpected error obtaining pod information: unable to get POD information (missing POD_NAME or POD_NAMESPACE environment variable)
I have created a development namespace in my cluster; that is where my microservice is deployed, and also the nginx controller. I do not understand how nginx picks up my pods or how I am supposed to pass the pod name or pod namespace.
here is my nginx controller:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-controller
spec:
  replicas: 1
  selector:
    matchLabels:
      name: nginx-ingress
  template:
    metadata:
      labels:
        name: nginx-ingress
    spec:
      containers:
      - name: nginx-ingress-controller
        image: quay.io/kubernetes-ingress-controller/nginx-ingress-controller:0.27.0
        args:
        - /nginx-ingress-controller
        - --configmap=$(POD_NAMESPACE)/nginx-configuration
        env:
        - name: mycha-deploy
          valueFrom:
            fieldRef:
              fieldPath: metadata.name
        ports:
        - name: http
          containerPort: 80
        - name: https
          containerPort: 443
and here is my deployment:
# Deployment
apiVersion: apps/v1
kind: Deployment
metadata:
  name: mycha-deploy
  labels:
    app: mycha-app
spec:
  replicas: 3
  selector:
    matchLabels:
      app: mycha-app
  template:
    metadata:
      labels:
        app: mycha-app
    spec:
      containers:
      - name: mycha-container
        image: us.gcr.io/##########/mycha-frontend_kubernetes_rrk8s
        ports:
        - containerPort: 80
thank you
Your nginx ingress controller deployment YAML looks incomplete; among other things, it is missing the environment variables below.
env:
  - name: POD_NAME
    valueFrom:
      fieldRef:
        fieldPath: metadata.name
  - name: POD_NAMESPACE
    valueFrom:
      fieldRef:
        fieldPath: metadata.namespace
Follow the installation docs and use yamls from here
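For example, the container section of your nginx-controller Deployment might look roughly like this once the two env vars are added (image tag and --configmap argument copied from your YAML; everything else stays as you had it):
      containers:
      - name: nginx-ingress-controller
        image: quay.io/kubernetes-ingress-controller/nginx-ingress-controller:0.27.0
        args:
        - /nginx-ingress-controller
        - --configmap=$(POD_NAMESPACE)/nginx-configuration
        env:
        - name: POD_NAME
          valueFrom:
            fieldRef:
              fieldPath: metadata.name
        - name: POD_NAMESPACE
          valueFrom:
            fieldRef:
              fieldPath: metadata.namespace
        ports:
        - name: http
          containerPort: 80
        - name: https
          containerPort: 443
Note that the $(POD_NAMESPACE) substitution in args only resolves once that env var is defined on the container.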
To expose your service using an Nginx Ingress, you need to configure it first.
Follow the installation guide for your Kubernetes installation.
You also need a service to 'group' the containers of your application.
In Kubernetes, a Service is an abstraction which defines a logical set of Pods and a policy by which to access them (sometimes this pattern is called a micro-service). The set of Pods targeted by a Service is usually determined by a selector
...
For example, consider a stateless image-processing backend which is running with 3 replicas. Those replicas are fungible—frontends do not care which backend they use. While the actual Pods that compose the backend set may change, the frontend clients should not need to be aware of that, nor should they need to keep track of the set of backends themselves.
The Service abstraction enables this decoupling.
As you can see, the service will discover your containers based on the label selector configured in your deployment.
To check the container's label selector: kubectl get pods -owide -l app=mycha-app
Service yaml
Apply the follow yaml to create a service for your deployment:
apiVersion: v1
kind: Service
metadata:
  name: mycha-service
spec:
  selector:
    app: mycha-app   # <= This is the selector
  ports:
  - protocol: TCP
    port: 8080
    targetPort: 80
Check if the service is created with kubectl get svc.
Test the app using port-forwarding from your desktop at http://localhost:8080:
kubectl port-forward svc/mycha-service 8080:8080
nginx-ingress yaml
The last part is the nginx-ingress. Supposing your app has the url mycha-service.com and only the root '/' path:
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: ingress-mycha-service
  annotations:
    kubernetes.io/ingress.class: "nginx"
spec:
  rules:
  - host: mycha-service.com   # <= app url
    http:
      paths:
      - path: /
        backend:
          serviceName: mycha-service   # <= Here you define what is the service that your ingress will use to send the requests.
          servicePort: 8080            # <= must match the Service port above
Check the ingress: kubectl get ingress
NAME                    HOSTS               ADDRESS    PORTS   AGE
ingress-mycha-service   mycha-service.com   XX.X.X.X   80      63s
Now you are able to reach your application using the url mycha-service.com and ADDRESS displayed by command above.
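If DNS for mycha-service.com isn't set up yet, you can test through the ADDRESS directly by overriding the Host header (the IP below is just the placeholder from the output above):
curl -H "Host: mycha-service.com" http://XX.X.X.X/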
I hope it helps =)

Exposing Neo4j Bolt using Kubernetes Ingress

I'm trying to build a Neo4j learning tool for some of our trainings. I want to use Kubernetes to spin up a Neo4j Pod for each participant to use. Currently I'm struggling to expose the bolt endpoint using an Ingress, and I don't know why.
Here are my deployment configs:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: neo4j
  namespace: learn
  labels:
    app: neo-manager
    type: database
spec:
  replicas: 1
  selector:
    matchLabels:
      app: neo-manager
      type: database
  template:
    metadata:
      labels:
        app: neo-manager
        type: database
    spec:
      containers:
      - name: neo4j
        imagePullPolicy: IfNotPresent
        image: neo4j:3.5.6
        ports:
        - containerPort: 7474
        - containerPort: 7687
          protocol: TCP
---
kind: Service
apiVersion: v1
metadata:
  name: neo4j-service
  namespace: learn
  labels:
    app: neo-manager
    type: database
spec:
  selector:
    app: neo-manager
    type: database
  ports:
  - port: 7687
    targetPort: 7687
    name: bolt
    protocol: TCP
  - port: 7474
    targetPort: 7474
    name: client
---
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: neo4j-ingress
  namespace: learn
  annotations:
    nginx.ingress.kubernetes.io/rewrite-target: /
spec:
  rules:
  - host: learn.neo4j.com
    http:
      paths:
      - path: /
        backend:
          serviceName: neo4j-service
          servicePort: 7474
---
kind: ConfigMap
apiVersion: v1
metadata:
  name: tcp-services
  namespace: learn
data:
  7687: "learn/neo4j-service:7687"
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-ingress-controller
  namespace: learn
spec:
  replicas: 1
  selector:
    matchLabels:
      app: ingress-nginx
  template:
    metadata:
      labels:
        app: ingress-nginx
    spec:
      containers:
      - name: nginx-ingress-controller
        image: quay.io/kubernetes-ingress-controller/nginx-ingress-controller:0.9.0-beta.16
        args:
        - /nginx-ingress-controller
        - --tcp-services-configmap=${POD_NAMESPACE}/tcp-services
        env:
        - name: POD_NAMESPACE
          valueFrom:
            fieldRef:
              fieldPath: metadata.namespace
The client gets exposed nicely and is reachable under learn.neo4j.com, but I don't know what to point it at to connect to the DB using bolt. Whatever I try, it fails with ServiceUnavailable: Websocket Connection failure (WebSocket network error: The operation couldn't be completed. Connection refused in the console).
What am I missing?
The nginx-ingress-controller by default creates http(s) proxies only.
In your case you're trying to use a different protocol (bolt) so you need to configure your ingress controller in order for it to make a TCP proxy.
In order to do so, you need to create a configmap (in the nginx-ingress-controller namespace) similar to the following:
apiVersion: v1
kind: ConfigMap
metadata:
  name: tcp-services
  namespace: ingress-nginx
data:
  7687: "<your neo4j namespace>/neo4j-service:7687"
Then, make sure your ingress controller has the following flag in its command:
--tcp-services-configmap tcp-services
This will make your nginx-ingress controller listen to port 7687 with a TCP proxy.
You can delete the neo4j-bolt-ingress Ingress, that's not going to be used.
Of course you have to ensure that the ingress controller correctly exposes the 7687 port the same way it does with ports 80 and 443, and possibly you'll have to adjust the settings of any firewall and load balancer you might have.
Source: https://kubernetes.github.io/ingress-nginx/user-guide/exposing-tcp-udp-services/
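As an illustration of that last point, the Service in front of the ingress controller needs an extra port entry for 7687 next to 80 and 443. A minimal sketch, assuming a stock ingress-nginx install (the service name, namespace and selector labels are assumptions that depend on how the controller was installed):
apiVersion: v1
kind: Service
metadata:
  name: ingress-nginx
  namespace: ingress-nginx
spec:
  type: LoadBalancer
  selector:
    app: ingress-nginx
  ports:
  - name: http
    port: 80
    targetPort: 80
  - name: https
    port: 443
    targetPort: 443
  - name: bolt          # extra TCP port proxied via the tcp-services ConfigMap
    port: 7687
    targetPort: 7687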
It automatically tries to connect to port 7687 by default - if you enter the connection url http://learn.neo4j.bolt.com:80 (or https), it works.
I haven't used Kubernetes ingress in this context before, but I think that when you use HTTP or HTTPS to connect to Neo4j, you still require external availability to connect to the bolt port (7687). Does your setup allow for that?
Try using multi-path mapping in your Ingress:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: neo4j-ingress
  namespace: learn
  annotations:
    nginx.ingress.kubernetes.io/rewrite-target: /
spec:
  rules:
  - host: learn.neo4j.com
    http:
      paths:
      - path: /browser
        backend:
          serviceName: neo4j-service
          servicePort: 7474
      - path: /
        backend:
          serviceName: neo4j-service
          servicePort: 7687
You should then be able to access the UI at learn.neo4j.com/browser. The bolt Connect URL would have to specified as:
bolt+s://learn.neo4j.com:443/

How to produce to kafka broker running inside container from outside the docker host?

I am trying to produce to a Kafka broker which is running inside a container launched by Kubernetes. I am playing with KAFKA_ADVERTISED_LISTENERES and KAFKA_LISTERNERS.
I tried setting these two env variables, KAFKA_ADVERTISED_LISTENERES = PLAINTEXT://<host-ip>:9092 and KAFKA_LISTERNERS = PLAINTEXT://0.0.0.0:9092, and ran it with docker-compose, and I was able to produce from an application outside the host machine.
But when I set these two env variables in the Kubernetes .yml file, I get a No broker list available exception.
What am I missing here?
Update:
kafka-pod.yaml:
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  namespace: casb-deployment
  name: kafkaservice
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: kafkaservice
    spec:
      hostname: kafkaservice
      #hostNetwork: true # to access docker outside of host container
      containers:
      - name: kafkaservice
        imagePullPolicy: IfNotPresent
        image: wurstmeister/kafka:1.1.0
        env: # for production
        - name: KAFKA_ADVERTISED_LISTENERES
          value: "PLAINTEXT://<host-ip>:9092"
        - name: KAFKA_LISTERNERS
          value: "PLAINTEXT://0.0.0.0:9092"
        - name: KAFKA_CREATE_TOPICS
          value: "Topic1:1:1,Topic2:1:1"
        - name: KAFKA_MESSAGE_TIMESTAMP_TYPE
          value: "LogAppendTime"
        - name: KAFKA_LOG_MESSAGE_TIMESTAMP_TYPE
          value: "LogAppendTime"
        - name: KAFKA_ZOOKEEPER_CONNECT
          value: "zookeeper:2181"
        ports:
        - name: port9092
          containerPort: 9092
---
apiVersion: v1
kind: Service
metadata:
  namespace: casb-deployment
  name: kafkaservice
  labels:
    app: kafkaservice
spec:
  selector:
    app: kafkaservice
  ports:
  - name: port9092
    port: 9092
    targetPort: 9092
    protocol: TCP
I'm assuming you have a Kubernetes Service whose selector links the ingress flow to your Kafka broker and which exposes a nodePort (as opposed to only a clusterIP).
https://kubernetes.io/docs/concepts/services-networking/service/
So the Kubernetes pod should be reachable through localhost:<nodePort> from a node.
You can also set up a load balancer in front of your Kubernetes cluster; then you can just expose the k8s pods, i.e., allow external ingress.
The next step is then to leverage some DNS record so the outbound requests produced by your docker-compose-based containers go to DNS and come back to your Kubernetes cluster through the load balancer.
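If you go the NodePort route, a minimal sketch of an additional Service for external clients could look like the following (the service name and the nodePort value 30092 are assumptions; KAFKA_ADVERTISED_LISTENERS would then need to advertise <node-ip>:30092 so external producers get back an address they can actually reach):
apiVersion: v1
kind: Service
metadata:
  namespace: casb-deployment
  name: kafkaservice-external
spec:
  type: NodePort
  selector:
    app: kafkaservice
  ports:
  - name: port9092
    port: 9092
    targetPort: 9092
    nodePort: 30092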
