I have a local Kubernetes cluster set up using the edge release of Docker for Mac. My pods use an env var that I've defined to be my DB's URL. These env vars are defined in a ConfigMap as:
apiVersion: v1
kind: ConfigMap
metadata:
  name: my-config
data:
  DB_URL: postgres://user@localhost/my_dev_db?sslmode=disable
What should I be using here instead of localhost? I need this env var to point to my local dev machine.
You can use the private LAN address of your computer, but make sure your database software is listening on all network interfaces and that no firewall is blocking incoming traffic.
If your LAN address is dynamic, you could use an internal DNS name pointing to your computer if your network setup provides one.
Another option is to run your database inside the Kubernetes cluster: this way you could use its service name as the hostname.
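For illustration, here is a minimal sketch of the ConfigMap from the question pointed at a private LAN address (192.168.1.50 is a placeholder; substitute whatever address your machine actually has):
apiVersion: v1
kind: ConfigMap
metadata:
  name: my-config
data:
  # 192.168.1.50 stands in for your dev machine's LAN address
  DB_URL: postgres://user@192.168.1.50/my_dev_db?sslmode=disable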
Option 1 - Local Networking Approach
If you are running minikube, I would recommend taking a look at the answers to this question: Routing an internal Kubernetes IP address to the host system
Option 2 - Tunneling Solution: Connect to an External Service
A very simple but slightly hacky solution is to use a tunneling tool like ngrok: https://ngrok.com/
Option 3 - Cloud-native Development (run everything inside k8s)
If you plan to follow the suggestions of whites11, you could make your life a lot easier by using a Kubernetes-native dev tool such as DevSpace (https://github.com/covexo/devspace) or Draft (https://github.com/Azure/draft). Both work with minikube or other self-hosted clusters.
Related
I have built a .NET Core Azure Function using a ServiceBusTrigger. The function works fine when deployed in a regular App Service plan once the appropriate Application settings, such as the Service Bus connection string, are configured.
However, I would prefer to host the function as a Docker container on Azure Kubernetes Service (AKS). I have AKS setup and have a number of .NET Core Docker containers running fine there, including some Azure Functions on TimerTriggers.
When I deploy the function using the ServiceBusTrigger, it fails to run properly and I get "Function host is not running." when I visit the function's IP address. I believe this is because the app settings are not being found.
The problem is I do not know how to include them when hosting in the Docker/Kubernetes environment. I've tried including the appropriate ENV entries in the Dockerfile, but then I cannot find the corresponding values in the deployment YAML viewed via the Kubernetes dashboard after I've successfully run func deploy from PowerShell.
Most of the Microsoft documentation addresses the TimerTrigger and HttpTrigger cases, but I can find little on the ServiceBusTrigger when using Docker/Kubernetes.
So, how do I include the appropriate app settings with my deployment?
The blog post Playing with Azure Functions Kubernetes integration describes how to add environment variables.
In the deployment.yml, add an env section (e.g. AzureWebJobsStorage as an environment variable):
containers:
- image: tsuyoshiushio/queuefunction-azurefunc
  imagePullPolicy: Always
  name: queuefunction-deployment
  env:
  - name: AzureWebJobsStorage
    value: YOUR_STORAGE_ACCOUNT_CONNECTION_STRING_HERE
  ports:
  - containerPort: 80
    protocol: TCP
Then apply it and it should work.
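For example, assuming the file is named deployment.yml (names here are illustrative):
kubectl apply -f deployment.yml
# verify the variable made it into the container
kubectl exec <pod-name> -- printenv AzureWebJobsStorage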
I'm working locally on OSX.
I'm using Kafka and ZooKeeper in local mode, meaning the ZooKeeper bundled with my Kafka installation, as a one-node cluster.
Both listen on the loopback address, localhost:
zookeeper.connect=localhost:2181
My /etc/hosts file looks as follows:
127.0.0.1 localhost MaatPro.local
255.255.255.255 broadcasthost
::1 localhost MaatPro.local
fe80::1%lo0 localhost MaatPro.local
I have docker for Mac set on my machine, with the Kubernetes extension.
My scenario
I have a dockerized Akka Streams microservice that reads data from an external database and writes it to a Kafka topic. It uses the following bootstrap server:
"localhost:9092"
Issue
When I run my service on my machine directly (e.g. from the command line or from within IntelliJ), everything works fine. When I run it in my local Docker or Kubernetes, I get the following error:
(o.a.k.clients.NetworkClient) [Producer clientId=producer-1] Connection to node 0 could not be established. The broker may not be available.
With Kubernetes, I built the following YAML file to deploy my pod:
apiVersion: v1
kind: Pod
metadata:
  name: fetcher
spec:
  hostNetwork: true
  containers:
  - image: entellectextractorsfetchers:0.1.0-SNAPSHOT
    name: fetcher
I took the precaution of setting hostNetwork: true.
With the Docker daemon directly, I originally tried to set the network to host too, but discovered that this does not work with Docker for Mac. Hence, I abandoned that route; I understood it has to do with virtualization.
1) Is the virtualization issue that happens with Docker actually the same for my local Kubernetes? Basically, is the host network the virtual machine and not my Mac?
2) I tried changing my code to add host.docker.internal as a bootstrap server, as per the documentation, but the problem persists. Is the fundamental problem the fact that I am working on a loopback address? Should I work with my network address instead? Which address does host.docker.internal point to? How can I make it work with the loopback address? If I'm completely off, any idea what I need to implement to get this working?
Thank you so much for any help with this.
Based on @cricket_007's guidance in Kafka Listeners - Explained, and the many reads I have had here and there on this issue, including the official Docker documentation I want to connect from a container to a service on the host,
I came up with the following solution.
I added the following to my default local Kafka configuration (i.e. server.properties):
listeners=EXTERNAL://0.0.0.0:19092,INTERNAL://0.0.0.0:9092
listener.security.protocol.map=INTERNAL:PLAINTEXT,EXTERNAL:PLAINTEXT
advertised.listeners=INTERNAL://127.0.0.1:9092,EXTERNAL://docker.for.mac.localhost:19092
inter.broker.listener.name=INTERNAL
In fact, EXTERNAL here is expected to be the Docker network. This config is only for my OSX machine, for local development purposes. I do not expect people connecting to my laptop to use my local Kafka, hence I can use EXTERNAL://docker.for.mac.localhost:19092. This is what is advertised to my container in Docker/Kubernetes. From within that network, docker.for.mac.localhost is reachable.
Note this would probably not work with Minikube; it is specific to Docker for Mac. The Kubernetes I run on my machine is the one that comes with Docker for Mac, not Minikube.
Finally, in my code I use both in a list:
"localhost:9092, docker.for.mac.localhost:19092"
I use Typesafe Config, so that in prod this is overridden by an env variable. When the env variable is not specified, this default is used. When I run my microservice from IntelliJ, localhost:9092 is used, because in that case I am on the same network as the Kafka/ZooKeeper on my machine. However, when I run the same microservice from Docker/Kubernetes, docker.for.mac.localhost:19092 is used.
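As a minimal sketch of that Typesafe Config (HOCON) setup, with a key name of my own choosing and the standard environment-variable fallback syntax:
# application.conf (key and env var names are illustrative)
kafka {
  bootstrap-servers = "localhost:9092, docker.for.mac.localhost:19092"
  # if KAFKA_BOOTSTRAP_SERVERS is set (e.g. in prod), it overrides the default above
  bootstrap-servers = ${?KAFKA_BOOTSTRAP_SERVERS}
}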
Answers to the side questions I had
Yes. Docker for Mac uses HyperKit as a lightweight virtual machine running Linux, and Docker Engine runs on top of it. The Docker for Mac Kubernetes extension is basically about running the Kubernetes cluster services/infrastructure as containers in the Docker daemon (see Docker for Mac vs. Docker Toolbox). In other words, the host is HyperKit and not OSX. But as that doc explains, the Docker for Mac implementation is all about making it appear to the user as if there were no virtualization involved between OSX and Docker.
Connecting to the host using a loopback address is an issue that has not been solved; I'm not sure it works reliably even when the host is Linux (it might have been resolved at this point). Nonetheless, it would require running the image with the container, or the pod in the case of Kubernetes, on the host network. But in Docker for Mac, that functionality will never work, based on my readings online. Hence the solution of using docker.for.mac.localhost or host.docker.internal, which Docker for Mac sets up to refer to the Mac host and not the HyperKit host.
host.docker.internal and docker.for.mac.localhost are one and the same, and the latest recommendation at this point is host.docker.internal. That said, this address did not originally work for me because my Kafka setup was not right. It is worth reading @cricket_007's link to understand this well: http://rmoff.net/2018/08/02/kafka-listeners-explained
Following @MaatDeamon's approach, I only did the following and it worked for me.
advertised.listeners=PLAINTEXT://:9092
And in your application config properties:
localhost:9092,host.docker.internal:9092
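As a quick sanity check that a container can actually reach the broker on the host, something like the following should list the broker metadata (the kafkacat image choice is just an example):
docker run --rm -it confluentinc/cp-kafkacat \
  kafkacat -b host.docker.internal:9092 -L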
I just tried setting up Kubernetes on my bare-metal server.
Previously, I had successfully created my Docker Compose setup.
There are several apps:
Apps A (docker image name : a-service)
Apps B (docker image name : b-service)
Inside applications A and B there are configs (actually there are apps A, B, C, D, etc., lots of them).
The config file is something like this
IPFORSERVICEA=http://a-service:port-number/path/to/something
IPFORSERVICEB=http://b-service:port-number/path/to/something
At least the above config works in Docker Compose (the config is at the app level, where each app needs to access the others). Is there any chance for me to access one Kubernetes Service from another service? I am planning to create one app per Deployment, with one Service for each Deployment.
Something like:
App -> Deployment -> Service (e.g. NodePort, ClusterIP)
Thanks !
Is there any chance for me to access another Kubernetes Service from
another service ?
Yes, you just need to specify the DNS name of the service you need to connect to (type: ClusterIP works fine for this):
<service_name>.<namespace>.svc.cluster.local
In this case, such a domain name will be correctly resolved into the internal IP address of the service you need to connect to, using the built-in DNS.
For example:
nginx-service.web.svc.cluster.local
where nginx-service is the name of your service and web is the app's namespace, so the service YAML definition can look like:
apiVersion: v1
kind: Service
metadata:
  name: nginx-service
  namespace: web
spec:
  ports:
  - name: http
    protocol: TCP
    port: 80
  selector:
    app: nginx
  type: ClusterIP
See official docs to get more information.
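Applied to the config from the question, and assuming both apps live in the default namespace (keeping the original port-number placeholder), that would look like:
IPFORSERVICEA=http://a-service.default.svc.cluster.local:port-number/path/to/something
IPFORSERVICEB=http://b-service.default.svc.cluster.local:port-number/path/to/something
Within the same namespace the short name a-service also resolves.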
Use Kubernetes service discovery.
Service discovery is the process of figuring out how to connect to a
service. While there is a service discovery option based on
environment variables available, the DNS-based service discovery is
preferable. Note that DNS is a cluster add-on so make sure your
Kubernetes distribution provides for one or install it yourself.
Service discovery by example
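As a quick way to try this out, a throwaway pod can resolve the name (busybox is pinned to 1.28 here because later tags have known nslookup quirks):
kubectl run dns-test --rm -it --image=busybox:1.28 --restart=Never \
  -- nslookup nginx-service.web.svc.cluster.local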
I am setting up 2 VPCs on GCP. I set up kubeadm on each; let's call them kubemaster and kubenode1. So I ran kubeadm on kubemaster and kubenode1, that is:
kubeadm init on kubemaster
kubeadm join on kubenode1
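For reference, the typical shape of those commands looks like this (the token and hash are printed by kubeadm init; the CIDR flag matches flannel's default):
# on kubemaster
kubeadm init --pod-network-cidr=10.244.0.0/16
# on kubenode1, using the values kubeadm init printed
kubeadm join <master-ip>:6443 --token <token> \
  --discovery-token-ca-cert-hash sha256:<hash>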
Then I tried kubectl apply -f with a Deployment (which contains a pod with a simple webapp inside) and kubectl apply -f with a NodePort-type Service targeting the deployment's port.
After that I simply tried to access the webapp from my browser (on my local machine, not on GCP), and it just does not work the way it did on minikube (which I set up with the same kubectl apply commands as above). I did some searching, and a lot of people mention Ingress and the network layer (flannel, in the Kubernetes website's example).
My question is: what are Ingress and flannel? Which one is necessary, or are both unnecessary, if I just want my webapp to run? How do they relate to each other? From my understanding the layering is as below:
Traffic -> Services -> Deployments/Pods
Where do Ingress and flannel fit in? If it's not about them, why does my app not work as intended (I opened all ports in the GCP settings, so I suppose it's not a security issue)? I tried setting up the Kubernetes Dashboard UI and running kubectl proxy, and still my browser cannot access either service (my webapp inside the deployment, and also the Dashboard API). Maybe I am a little bit lost here.
flannel and Ingress are completely different things.
flannel is a CNI (Container Network Interface) plugin whose task is networking between containers. As CoreOS puts it:
each container is assigned an IP address that can be used to
communicate with other containers on the same host. For communicating
over a network, containers are tied to the IP addresses of the host
machines and must rely on port-mapping to reach the desired container.
This makes it difficult for applications running inside containers to
advertise their external IP and port as that information is not
available to them.
flannel solves the problem by giving each container an IP that can be
used for container-to-container communication. It uses packet
encapsulation to create a virtual overlay network that spans the whole
cluster. More specifically, flannel gives each host an IP subnet (/24
by default) from which the Docker daemon is able to allocate IPs to
the individual containers.
Kubernetes supports several other CNI plugins: Calico, Weave, etc. They vary in functionality (e.g. support for features like NetworkPolicy for restricting resources).
Ingress is a Kubernetes object which usually operates at the application layer of the network stack (HTTP) and allows you to expose your Service externally. It also provides features such as HTTP request routing, cookie-based session affinity, HTTPS traffic termination and so on (much like a web server such as Nginx or Apache).
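For illustration, a minimal Ingress sketch (all names are placeholders; on clusters as old as the 1.12-era ones in this thread the apiVersion would be extensions/v1beta1 instead):
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: webapp-ingress
spec:
  rules:
  - host: webapp.example.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: webapp-service
            port:
              number: 80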
I want to add a few more points to the existing answers.
After that I simply access the webapps from my browser (on my local
machine not on GCP), it just does not work as what I tried on minikube
Did you open the security/firewall rules for the NodePort? On which instance did you open them, and which instance are you hitting to access your app?
My question is what are these Ingress and flannel?
I recommend you read the official docs. But anyway, since you asked, I would like to say a few words.
Flannel is an overlay network for containers in which the container subnet can span multiple nodes (as opposed to native Docker networking: host networking, NAT, etc.). Each container gets its own IP every time it spawns. Flannel acts more like a control plane for the container network and is internal to K8s.
I highly recommend reading How Flannel N/W works.
Ingress is a smart router for the load balancer (or, put simply for now, it exposes the application outside of K8s). It works at the application level. Once you hit the Ingress endpoint, it forwards to a service (depending on the ingress rules) and then on to the app pod.
A blog on Ingress: https://medium.com/@cashisclay/kubernetes-ingress-82aa960f658e
I see you were talking about ClusterIP. Generally, the ClusterIP is the IP for a K8s Service, which is nothing but the magic of iptables rules. kube-proxy is responsible for writing iptables rules on every node once you define a Service. These iptables rules, i.e. the ClusterIP, point to the actual pod IPs (the IPs assigned by the flannel daemon). I hope this makes clear how flannel and Ingress fit into the picture and work together to handle application traffic. (Please correct me if I'm wrong!)
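If you want to see this for yourself, the rules live in the nat table on every node (the KUBE-SERVICES chain is created by kube-proxy in its default iptables mode):
sudo iptables -t nat -L KUBE-SERVICES -n | head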
Can you paste the ingress controller YAML content? What rules did you define?
Since you are using GCP, why don't you try GKE? It is easy to deploy, and you can access your application with a LoadBalancer instead of depending on Ingress. (Anyway, it's none of my business :-))
In short, flannel, or the pod-to-pod networking layer in general, is what enables pods to talk to each other in Kubernetes. An Ingress Controller, on the other hand, is what takes Ingress objects and turns them into rules for receiving and forwarding (mostly) HTTP(S) traffic to the backing services, over the pod-to-pod network.
As you can see, technically you only need the first one (pod-to-pod networking), since you can directly expose your service somewhere with a NodePort or LoadBalancer Service. It is very convenient, though, to use Ingress if you expose multiple services (pretty much like you do with vhosts on classic web server installations).
I have successfully connected my Kubernetes cluster with GitLab. I was also able to install Helm through the GitLab UI (Operations -> Kubernetes).
My problem is that if I click the "Install" button for Ingress, GitLab creates all the necessary pieces for the Ingress controller, but one thing is missing: the external IP. The external IP is shown as "?".
And If I run this command:
kubectl get svc --namespace=gitlab-managed-apps ingress-nginx-ingress-controller -o jsonpath='{.status.loadBalancer.ingress[0].ip}'; echo
It shows nothing, as if I don't have a load balancer exposing an external IP.
Kubernetes Cluster
I installed Kubernetes through kubeadm, using flannel as CNI
kubectl version:
Client Version: version.Info{Major:"1", Minor:"12", GitVersion:"v1.12.2"}
Server Version: version.Info{Major:"1", Minor:"12", GitVersion:"v1.12.2"}
Is there something I have to configure before installing Ingress? Do I need an external load balancer (my thought: GitLab will create that service for me)?
One more hint: after installation, the state of the nginx-ingress-controller Service stays pending, the reason being that it is not able to detect an external IP. I also modified the YAML file of the Service and manually added an "externalIPs:" entry. After that it was no longer pending, but I still couldn't find an external IP by typing the above command, and GitLab couldn't find any external IP either.
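For reference, the manual edit described above amounts to something like this fragment in the Service spec (using the address that appears in EDIT2 below):
spec:
  type: LoadBalancer
  externalIPs:
  - 192.168.50.235  # your node's address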
EDIT:
This happens after installation:
see picture
EDIT2:
By running the following command:
kubectl describe svc ingress-nginx-ingress-controller -n gitlab-managed-apps
I get the following result:
see picture
In the event log you can see that I switched the type to "NodePort" once and then back to "LoadBalancer", and that I added the "externalIPs: -192.168.50.235" line in the YAML file. As you can see, there is an external IP, but GitLab is not detecting it.
Btw, I'm not using any of the cloud providers like AWS or GCE, and I found out that LoadBalancer does not work that way without them. But there must be a solution for this without a LoadBalancer.
I would consider looking at MetalLB as the main provisioner of load-balancing services in your cluster. If you don't use any cloud provider to obtain the entry point (external IP) for the Ingress resource, there is an option for bare-metal environments: switch to MetalLB, which creates Kubernetes Services of type LoadBalancer in clusters that don't run on a cloud provider, and can therefore also serve the NGINX Ingress Controller.
Generally, MetalLB can be installed via a Kubernetes manifest file or using the Helm package manager, as described here.
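For example, via Helm it would look roughly like this (the chart repo URL is the MetalLB project's; the flags assume Helm 3, so check the linked docs for your setup):
helm repo add metallb https://metallb.github.io/metallb
helm install metallb metallb/metallb \
  --namespace metallb-system --create-namespace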
MetalLB deploys its own services across the Kubernetes cluster, and it may require reserving a pool of IP addresses in order to take ownership of the ingress-nginx Service. This pool can be defined in a ConfigMap called config, located in the same namespace as the MetalLB controller:
apiVersion: v1
kind: ConfigMap
metadata:
  namespace: metallb-system
  name: config
data:
  config: |
    address-pools:
    - name: default
      protocol: layer2
      addresses:
      - 203.0.113.2-203.0.113.3
An external IP will be assigned to your LoadBalancer once the ingress service obtains an IP address from this address pool.
Find more details about MetalLB implementation for NGINX Ingress Controller in official documentation.
After some research I found out that this is a GitLab issue. As I said above, I successfully established a connection to my cluster. Since I'm using Kubernetes without a cloud provider, it is not possible to use the type LoadBalancer. Therefore you need to add an external IP or change the type to NodePort. This way you can make your Ingress controller accessible from outside.
Check this out: kubernetes service external ip pending
I just continued the Gitlab tutorial and it worked.