I have a two-node Kubernetes cluster on a Linux server, and I use the Kubernetes API to pull stats about the nodes over a plain HTTP API through kubeproxy. However, I haven't found any good documentation on how to use HTTPS instead. I'm fairly new to setting up environments, so a lot of the high-level documentation goes over my head.
You need to configure HTTPS on the Kubernetes API server. You can check Kelsey Hightower's https://github.com/kelseyhightower/kubernetes-the-hard-way
Also look at this doc: https://kubernetes.io/docs/admin/accessing-the-api/
Configuring Kubernetes SSL manually can be hard. If you run into trouble, try the kubeadm utility: https://kubernetes.io/docs/setup/independent/create-cluster-kubeadm/
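If it helps to see the moving parts: once the API server is serving HTTPS, a client reaches it on the secure port (6443 by default), verifies the server against the cluster CA, and authenticates with a bearer token or client certificate. A minimal kubeconfig sketch, with placeholder values for the server address, CA path, and token:

# Hypothetical kubeconfig sketch: reach the API server over HTTPS,
# verify it against the cluster CA, authenticate with a token.
apiVersion: v1
kind: Config
clusters:
- name: my-cluster
  cluster:
    server: https://<master-ip>:6443                    # placeholder address
    certificate-authority: /etc/kubernetes/pki/ca.crt   # placeholder path
users:
- name: stats-reader
  user:
    token: <service-account-bearer-token>               # placeholder token
contexts:
- name: stats
  context:
    cluster: my-cluster
    user: stats-reader
current-context: stats

With a kubeconfig like this in place, kubectl proxy (or any HTTP client using the same CA and token) can talk to the API securely.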
So I am using JHipster to build my microservice architecture. I am at the point where it is almost time to fully deploy my gateway, but I have an issue.
SSL/TLS, specifically: I use Cloudflare to proxy my public IP and provide certificates, and I use Kemp for layer 7 routing of incoming traffic. My app is housed inside a Kubernetes cluster. My problem is I can't get SSL/TLS to work right, and I don't even know where to begin with adding the Cloudflare certificates to my JHipster gateway.
Any suggestions or help would be greatly appreciated; I have been looking for two weeks now, trying to solve it on my own.
Have you read the JHipster doc about TLS? https://www.jhipster.tech/production/#security
One point to think about is whether you want to expose your gateway publicly or put it behind a reverse proxy (e.g. nginx).
If you go for the reverse proxy, you'll find plenty of resources explaining how to do it.
If you want to expose your gateway directly, then it's not specific to JHipster; it's the same as for any Java application: you must import your certificate into a keystore.
You can do it using the JDK's keytool or, more simply, using KeyStore Explorer.
After that you might have to find a way to do it in Kubernetes; I can't say much there, but a rough sketch of one common pattern follows.
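On the Kubernetes side, a common pattern is to keep the certificate and key in a Secret and mount it into the gateway's pod, then build or point the keystore at the mounted files. This is only a sketch with placeholder names (gateway-tls, the image, and the mount path are assumptions, not JHipster specifics):

# Hypothetical sketch: store the Cloudflare certificate and key in a
# TLS Secret, then mount it into the gateway container.
apiVersion: v1
kind: Secret
metadata:
  name: gateway-tls                        # placeholder name
type: kubernetes.io/tls
data:
  tls.crt: <base64-encoded certificate>
  tls.key: <base64-encoded private key>
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: jhipster-gateway                   # placeholder name
spec:
  selector:
    matchLabels:
      app: jhipster-gateway
  template:
    metadata:
      labels:
        app: jhipster-gateway
    spec:
      containers:
      - name: gateway
        image: myorg/jhipster-gateway:latest   # placeholder image
        volumeMounts:
        - name: tls
          mountPath: /certs                    # the app reads its cert material here
          readOnly: true
      volumes:
      - name: tls
        secret:
          secretName: gateway-tls

Whether the gateway consumes the PEM files directly or you convert them into a keystore at build or startup time is up to you; the keytool/KeyStore Explorer step above still applies.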
Okay, I have a DB consisting of several nodes deployed to GKE.
The deployment.yaml exposes each node as a ClusterIP service, which makes sense. Here is the complete deployment file:
https://github.com/dgraph-io/dgraph/blob/master/contrib/config/kubernetes/dgraph-ha/dgraph-ha.yaml
For whatever reason, the DB has zero security functionality, so I cannot expose any part using a LoadBalancer service because doing so would give unsecured access to the entire DB. The vendor argues that security is solely the user's problem. The AlphaNode comes with an API endpoint, which is also unsecured, but I actually want to connect to that API endpoint from an external IP.
So, the best I can do is add NGINX as a (reverse) proxy with authentication to secure access to the API endpoint of the Alpha node(s). Practically, I have three Alpha nodes, so adding load balancing makes sense. I found a config that does load balancing to three Alpha nodes in Docker Compose, although without authentication:
https://gist.github.com/MichelDiz/42954e321620159c872c35c20e9d85c6
Now, the million-dollar question I have is: how do I add an NGINX load balancer to Kubernetes that authenticates and load balances incoming traffic to my (ClusterIP) Alpha nodes?
Any pointers?
Any help?
If you want to do it the hard way, you can deploy your own nginx Deployment and expose it as a LoadBalancer Service. You can configure it with the different authentication mechanisms that nginx supports.
Instead, you can use an Ingress resource backed by an ingress controller that supports authentication. Check whether your Kubernetes distribution provides an ingress controller and whether it supports auth. If not, you can install the nginx or Traefik ingress controllers, which support authentication.
Looks like GKE Ingress has recently added support for IAP-based authentication, which is still in beta: https://cloud.google.com/iap/docs/enabling-kubernetes-howto
If you are looking for a more traditional type of authentication with Ingress, install nginx or Traefik and use the kubernetes.io/ingress.class annotation so that only that ingress controller claims your Ingress resource: https://kubernetes.github.io/ingress-nginx/user-guide/multiple-ingress/
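As a rough sketch of what such an Ingress could look like with the nginx ingress controller (the host, secret, service name, and port are placeholders; the basic-auth Secret is assumed to already contain an htpasswd file under the key auth):

# Hypothetical sketch: basic auth at the ingress, load balancing to the
# existing ClusterIP service in front of the Alpha nodes.
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: alpha-api
  annotations:
    kubernetes.io/ingress.class: "nginx"
    nginx.ingress.kubernetes.io/auth-type: basic
    nginx.ingress.kubernetes.io/auth-secret: alpha-basic-auth   # placeholder secret
    nginx.ingress.kubernetes.io/auth-realm: "Authentication Required"
spec:
  rules:
  - host: dgraph.example.com              # placeholder host
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: dgraph-alpha-public     # placeholder: your Alpha ClusterIP service
            port:
              number: 8080

The ingress controller then handles both concerns: it authenticates the request and load balances across the Alpha pods behind the ClusterIP service.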
This is more of a research question. If it does not meet the standards of SO, please let me know and I will ask elsewhere.
I am new to Kubernetes and have a few basic questions. I have read a lot of documentation on the internet and was hoping someone could help answer a few basic questions.
I am trying to create an integration between Kubernetes (user applications running inside Docker containers, to be precise) and my application, which would act as a backup for certain data in the containers.
1. My application currently runs in AWS. Would the Kubernetes cluster need to run in AWS as well? Or can it run in any cloud service, or even on-prem, as long as the APIs are reachable?
2. Does my application need to know only the IP of the master node's API server to do POST/GET requests, and nothing else?
3. For authentication, can I use AD (my application uses AD today for a few things)? That would also give me role-based policies for each user. Or do I always have to use the Kubernetes TokenReview API for authentication?
4. Would the applications running in Kubernetes use the APIs I provide to communicate with my application?
5. Would my application use POST/GET to communicate with the Kubernetes master API server? Do I need to use kubectl for this and for #4 above?
Thanks for your help.
Your application needn't run on the same infrastructure as k8s. There are several ways to connect to a k8s cluster, depending on your use case: you can expose the built-in k8s API using kubectl proxy, connect directly to the k8s API on the master, or expose services via a load balancer or node port.
You would only need to know the IP of the master node if you're connecting to the cluster directly through the built-in k8s API, but in most cases you should only be using this API to internally administer your cluster. The preferred way of accessing k8s pods is to expose them via a load balancer, which allows you to access a service on any node from a single IP. k8s also allows you to access a service with a NodePort from any k8s node (except the master) through a preassigned port.
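A minimal Service sketch of that load-balancer pattern, with a hypothetical app label and port:

# Hypothetical sketch: expose pods labelled app=my-backend through a
# cloud load balancer; external port 443 forwards to container port 8080.
apiVersion: v1
kind: Service
metadata:
  name: my-backend-lb          # placeholder name
spec:
  type: LoadBalancer
  selector:
    app: my-backend            # placeholder label
  ports:
  - port: 443
    targetPort: 8080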
TokenReview is only one of the k8s auth strategies. I don't know anything about Active Directory auth, but at a glance OpenID connect tokens seem to support it. You should review whether or not you need to allow users direct access to the k8s API at all. Consider exposing services via LoadBalancer instead.
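For reference, a TokenReview is itself just an API object that you POST to the API server; a hedged sketch of the request body (the token value is a placeholder):

# Hypothetical sketch of a TokenReview request; the API server answers with
# status.authenticated and the resolved user info.
apiVersion: authentication.k8s.io/v1
kind: TokenReview
spec:
  token: <bearer-token-to-validate>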
I'm not sure what you mean by this, but if you deploy your APIs as k8s deployments you can expose their endpoints through services to communicate with your external application however you like.
Again, the preferred way to communicate with k8s pods from external applications is through services exposed as load balancers, not through the built-in API on the k8s master. In the case of services, it's up to the underlying API to decide which kinds of requests it wants to accept.
I have developed a Spring Boot-based REST API service and enabled HTTPS on it by using a self-signed cert keystore (to test locally), and it works well.
server.ssl.key-store=classpath:certs/keystore.jks
server.ssl.key-store-password=keystore
server.ssl.key-store-type=PKCS12
server.ssl.key-alias=tomcat
Now, I want to package a Docker image and deploy this service in a Kubernetes cluster. I know I can expose the service as a NodePort and access it externally.
What I want to know is: I doubt that my self-signed cert generated on my local machine will work when deployed in the Kubernetes cluster. I researched and found a couple of solutions using Kubernetes Ingress, Kubernetes Secrets, etc. I am confused as to what is the best way to go about this, so that I can access my service running in Kubernetes over HTTPS. What changes will I need to make to my REST API code?
UPDATED NOTE: Though I have used a self-signed cert for testing purposes, I can obtain a CA-signed cert from my company and use it for production. My question is more along the lines of: for a REST API service which already uses an SSL/TLS-based connection, what are some of the better ways to deploy and access the cert in a Kubernetes cluster, e.g. package it in the application itself, use Secrets, or scrap the application's SSL configuration and use Ingress instead? Hope my question makes sense :)
Thanks for any suggestions.
Well, it depends on the way you want to expose your service. Basically, your options are an Ingress, an external load balancer (only available in certain cloud environments), or a Service routed to a port (either via NodePort or HostPort).
Attention: our K8s cluster is self-hosted, so I have no reliable information about external load balancers in K8s and will therefore omit that option.
If you want to expose your service directly behind one of your domains on the standard port (e.g. https://app.myorg.org), you'll want to use an Ingress. But if you don't need that and can live with a specific port, the NodePort approach should do the trick (e.g. https://one.ofyourcluster.servers:30000/).
Let's assume you want to try the Ingress approach: then you need to add the certificates to the Ingress definition in K8s instead of the Spring Boot application, or you must additionally specify in the Ingress that the service itself is reachable via HTTPS. The way to do it may differ from ingress controller to ingress controller.
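A minimal sketch of the first variant (TLS terminated at the Ingress, plain HTTP to the Spring Boot service inside the cluster; the host, secret, and service names are placeholders, and the TLS Secret is assumed to hold your certificate and key):

# Hypothetical sketch: the Ingress terminates TLS using a kubernetes.io/tls
# Secret and forwards to the service on its plain HTTP port.
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: rest-api
spec:
  tls:
  - hosts:
    - api.example.com                # placeholder host
    secretName: rest-api-tls         # placeholder Secret with tls.crt/tls.key
  rules:
  - host: api.example.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: rest-api           # placeholder service name
            port:
              number: 8080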
For the NodePort/HostPort approach, you just need to enable SSL in your application (which you already do).
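If you keep the SSL configuration in the application, one way to get the keystore into the pod is a Secret mounted as a volume; a sketch with placeholder names (the Secret is assumed to have been created from your keystore.jks, and server.ssl.key-store would then point at the mounted path instead of the classpath):

# Hypothetical sketch: mount the keystore from a Secret so the Spring Boot
# app can serve HTTPS itself on its NodePort.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: rest-api                     # placeholder name
spec:
  selector:
    matchLabels:
      app: rest-api
  template:
    metadata:
      labels:
        app: rest-api
    spec:
      containers:
      - name: rest-api
        image: myorg/rest-api:latest # placeholder image
        ports:
        - containerPort: 8443
        volumeMounts:
        - name: keystore
          mountPath: /certs          # app reads /certs/keystore.jks
          readOnly: true
      volumes:
      - name: keystore
        secret:
          secretName: rest-api-keystore   # placeholder Secret holding keystore.jks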
Besides that, you also need a valid certificate, e.g. one issued by https://letsencrypt.org/
Actually, for K8s there are some projects that can fetch a Let's Encrypt certificate for you automatically if you use Ingresses (e.g. https://github.com/jetstack/cert-manager/).
Is Jira supported on GCE? If so, how do we make it work?
We have installed the 64-bit .bin of Jira (6.4.1) and opened the necessary custom HTTP ports under Networks.
We started Jira as a service, but we are unable to reach it via the browser. There is no error message other than a timed-out error!
Any help would be highly appreciated.
Note: We are new to Google Cloud Platform.
Did you enable the HTTP and HTTPS services on your instance? By default a GCE instance does not allow HTTP and HTTPS traffic; you have to enable it manually.
The Jira configuration for Google Compute Engine can be tricky. You need to make sure that:
The firewall rules under Networking allow a connection to the Jira HTTP port, or HTTP is enabled in the VM properties (a sketch follows after this list)
The global Networking rules allow TCP traffic on this port
The virtual network has routes configured
If you use Apache as a proxy for Jira (recommended), then make sure Apache is configured to point to the Tomcat port
Your Tomcat is configured
You have enabled port allocation using the setcap utility
Your local machine firewall allows the connection (on Red Hat, the firewall is enabled by default and blocks connections)
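If you would rather script the firewall rule than click through the console, here is a hedged Deployment Manager sketch (the project, rule name, and port are placeholders; adjust to the port Jira actually listens on):

# Hypothetical Deployment Manager sketch: open the Jira HTTP port on the
# default network. Replace <project> and the port with your values.
resources:
- name: allow-jira-http              # placeholder rule name
  type: compute.v1.firewall
  properties:
    network: https://www.googleapis.com/compute/v1/projects/<project>/global/networks/default
    sourceRanges:
    - 0.0.0.0/0
    allowed:
    - IPProtocol: tcp
      ports:
      - "8080"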
As you can see it may be tricky to install Jira on Google Cloud. It may be a good idea to use a deployment service like Deploy4Me to do this quickly and automatically.