I am installing Jenkins on GKE.
I want to use an Ingress (to avoid a LoadBalancer service), but I also want it to have TLS enabled.
Here are the ingress-related values:
ingress:
  enabled: false
  # For Kubernetes v1.14+, use 'networking.k8s.io/v1beta1'
  apiVersion: "extensions/v1beta1"
  labels: {}
  annotations: {}
  #   kubernetes.io/ingress.class: nginx
  #   kubernetes.io/tls-acme: "true"
  # Set this path to jenkinsUriPrefix above or use annotations to rewrite path
  # path: "/jenkins"
  # configures the hostname e.g. jenkins.example.com
  hostName:
  tls:
  # - secretName: jenkins.cluster.local
  #   hosts:
  #     - jenkins.cluster.local
Assuming I already have a Cloud DNS zone (routable as my-network.mydomain.net) and I want Jenkins accessible via jenkins.my-network.mydomain.net, how should I configure the above values?
What is the usefulness of the values.ingress.tls.secretName?
In case I enable tls, what will be the issuing authority of the corresponding certificate? Is this handled automatically by GCP?
The Ingress that you will set up will still need one load balancer. This load balancer receives traffic from clients and forwards it to the ingress controller (GKE Ingress, NGINX, etc.), so you are not really avoiding load balancers completely in this case.
What an Ingress avoids is the creation of one load balancer per Service when you use Kubernetes Services of type LoadBalancer to serve external clients. In your case, instead of exposing the Jenkins master Service via its own load balancer directly, you can choose an Ingress to avoid creating more than one load balancer.
What is the usefulness of the values.ingress.tls.secretName?
It tells the Ingress controller to secure the channel from the client to the load balancer using TLS. You need to make sure the TLS Secret you created came from a certificate whose Common Name (CN), i.e. its Fully Qualified Domain Name (FQDN), is jenkins.cluster.local.
You also need to create a Secret named jenkins.cluster.local:
apiVersion: v1
kind: Secret
metadata:
  name: jenkins.cluster.local
  namespace: default
data:
  tls.crt: base64 encoded cert
  tls.key: base64 encoded key
type: kubernetes.io/tls
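If you already have the certificate and key as files, an equivalent way to create that Secret is with kubectl (a sketch; the file names are assumptions):
kubectl create secret tls jenkins.cluster.local \
  --cert=tls.crt --key=tls.key \
  --namespace=default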
In case I enable tls, what will be the issuing authority of the
corresponding certificate? Is this handled automatically by GCP?
It's not handled automatically by GCP. Check the "Options for providing SSL certificates" section of the official docs.
Of the three options, I believe you need to follow "Self-managed certificates as Secret resources": provision your own SSL certificate and create a Secret to hold it. You can then refer to the Secret in an Ingress specification to create an HTTP(S) load balancer that uses the certificate. Refer to the instructions for using certificates in Secrets for more information.
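Putting it together for jenkins.my-network.mydomain.net, the chart values would look roughly like this (a sketch, assuming an nginx ingress controller and a TLS Secret named jenkins-tls that you provision yourself; both are illustrative):
ingress:
  enabled: true
  apiVersion: "extensions/v1beta1"
  annotations:
    kubernetes.io/ingress.class: nginx
  hostName: jenkins.my-network.mydomain.net
  tls:
    - secretName: jenkins-tls   # Secret of type kubernetes.io/tls holding your cert and key
      hosts:
        - jenkins.my-network.mydomain.net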
Related
I am running Azure AKS with Kubenet networking, in which I have deployed several services, exposed on several ports.
I have configured URL-based routing and it seems to work for the services I could test.
I found out the following:
Sending URL or URL:80 returns the desired web page, but if I include the port, the browser's address bar drops it and shows http://URL/.
When I try accessing other web pages or services, I get a strange phenomenon: calling the URL with the port number hangs until the browser says it's unreachable. Fiddler reports a timeout.
When I access the service (1 of the 3 I could check visually) without providing the port, the Ingress rules I applied answer the request and I get the resulting web page, which is exposed on the internal service port.
I'm using this YAML for the RabbitMQ management page:
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: rabbit-admin-on-ingress
  namespace: mynamespace
spec:
  rules:
  - host: rabbit.my.local
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: rabbitmq
            port:
              number: 15672
  ingressClassName: nginx
and I also apply this config (using kubectl apply -f config.file.yaml):
apiVersion: v1
kind: ConfigMap
metadata:
  name: tcp-services
  namespace: ingress-nginx
data:
  15672: "mynamespace/rabbitmq:15672"
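(For context: the tcp-services ConfigMap is only half of ingress-nginx's documented TCP exposure; the controller's own Service also has to publish the extra port. A minimal sketch of that fragment, assuming the stock ingress-nginx-controller Service in the ingress-nginx namespace:)
# Fragment of the ingress-nginx controller Service (names assumed):
spec:
  type: LoadBalancer
  ports:
    - name: proxied-tcp-15672
      port: 15672
      targetPort: 15672
      protocol: TCP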
What happens is:
http://rabbit.my.local gets the rabbit admin page
http://rabbit.my.local:15672 gets a timeout, and I get frustrated.
It seems this is also happening with another service I have running on port 8085, and perhaps even with the DB running on the usual SQL port (which might be a TCP-only connection).
Both are configured the same way as the rabbitmq service in the YAML rules and config file, with their respective service names, namespaces and ports.
Please help me figure out how I can make the Ingress accept URLs with :PORT attached and answer them. Save me.
A quick reminder: :80 works fine, perhaps because it's one of the defaults for Ingress.
Thank you so much in advance.
Moshe
I have a fairly simple setup in my kubernetes cluster, with two zones:
Low trust (public facing)
Medium trust (non public)
Both zones have Istio enabled, with:
Ingress gateway with SSL enabled. For testing within my local docker desktop, I use port 443 for the public facing, and port 443 for medium trust
Virtual service
Destination rule
I am deploying apache HTTPD - acting as a reverse proxy within the low trust. The plan is for the HTTPD to then forward the traffic to istio ingress gateway in the medium trust.
Within the medium trust is a Spring boot application.
So, let's say a user accesses https://lowtrust.avengers.local/avengers. This request will be serviced by the ingress gateway in lowtrust and will end up at HTTPD, which then forwards the request to the ingress gateway in mediumtrust.
LOWTRUST MEDIUMTRUST
| GW--> VS-->HTTPD Pod|======>| GW --> VS -->Java Pod|
I have created a github repo to demonstrate this:
https://github.com/alexwibowo/avengersKubernetes
The HTTP proxy configuration is here: https://github.com/alexwibowo/avengersKubernetes/blob/main/httpd/conf/proxy.conf.
The Istio ingress gateway for lowtrust:
https://github.com/alexwibowo/avengersKubernetes/blob/main/kubernetes/avengers/charts/avengers-istio/templates/istio-httpd.yaml
and istio ingress gateway for mediumtrust:
https://github.com/alexwibowo/avengersKubernetes/blob/main/kubernetes/avengers/charts/avengers-istio/templates/istio-app.yaml
As you can see, both gateways have their own certs configured. At the moment, I kind of 'cheat' by modifying my /etc/hosts file to have the following:
127.0.0.1 lowtrust.avengers.local
<CLUSTER_IP_ADDRESS> mediumtrust.avengers.local
By doing this, when the HTTPD pod makes a request to 'mediumtrust.avengers.local', it gets directed to the Istio ingress gateway (that's my understanding anyway).
I've heard that you can actually set up mutual TLS for the scenario I've described above. With this approach, I won't need to set up the certificate in my mediumtrust ingress gateway, and can just use 'ISTIO_MUTUAL'. I think for this I will also need to set up a 'proxy' Service & VirtualService in the lowtrust namespace; the VirtualService will then manage the communication between lowtrust & mediumtrust. But I'm not 100% sure how to do this.
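If it helps, here is a minimal sketch of the kind of DestinationRule I think this would involve (the host name is an assumption based on my app's Service in mediumtrust):
apiVersion: networking.istio.io/v1beta1
kind: DestinationRule
metadata:
  name: mediumtrust-app
  namespace: lowtrust
spec:
  host: cr1-avengers-app.mediumtrust.svc.cluster.local
  trafficPolicy:
    tls:
      mode: ISTIO_MUTUAL   # let the sidecars handle mutual TLS instead of a gateway cert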
Any help / advice is much appreciated!
Edit 1 (2021/07/01)
I've been reading more about this topic. Another option is to have a Service of type 'ExternalName' within the 'lowtrust' namespace,
which, if I may use the analogy, will act like a 'proxy' for connecting to the service in the other namespace.
e.g.:
apiVersion: v1
kind: Service
metadata:
  name: cr1-avengers-app
  namespace: "lowtrust"
spec:
  type: ExternalName
  externalName: "cr1-avengers-app.mediumtrust.svc.cluster.local"
  ports:
  - port: 8081
    targetPort: 8080
    protocol: TCP
    name: http
But by using this, I will effectively bypass the Istio VirtualService and DestinationRule that I've defined in the mediumtrust namespace.
The way I've managed to solve this locally is by having an entry in my Windows hosts file.
E.g.:
127.0.0.1 lowtrust.avengers.local
10.109.161.243 mediumtrust.avengers.local
10.109.161.243 is the Cluster IP address of my istio-ingressgateway. I got this by running kubectl get svc -n istio-system from the command line.
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
istio-ingressgateway LoadBalancer 10.109.161.243 localhost 15021:30564/TCP,80:31834/TCP,443:31828/TCP,445:32700/TCP,15012:30459/TCP,15443:30397/TCP 21d
I was also missing the 'SSLProxyEngine' directive in my reverse proxy configuration. So in the end my VirtualHost configuration looks like the following:
E.g.:
<VirtualHost *:7000>
    ProxyRequests Off
    <Proxy *>
        Order deny,allow
        Allow from all
    </Proxy>

    SSLProxyEngine on
    ProxyPass /avengers https://mediumtrust.avengers.local/avengers
    ProxyPassReverse /avengers https://mediumtrust.avengers.local/avengers

    CustomLog "/tmp/access.log" common
    ErrorLog /tmp/error.log
</VirtualHost>
I think I have an interesting use case, so I would like to hear the advice of people with more knowledge.
I have my App ("ads") which works in Kubernetes without any issue. It runs on port 9000.
It has args that carry its instance name (serverName) and, in the list of servers (servers), references to all the other servers as well, in order to run those servers in so-called companion mode, which is needed for performance reasons.
Please keep in mind that this is NOT a web server and simple replicas will NOT work for what we need to achieve, which is to have multiple ADS servers working in so-called companion mode: the primary server sends its cached data to another server so that server also has the recent data and can take over in case of failure.
Excerpt from the first ADS YAML file:
- in serverName we specify the name of the server instance
- in the servers arg we specify each regular ADS server address with its port
args:
....
  "-serverName", "ads"
  "-servers", "{ { ads , ads-test:9000 }, { ads2 , ads-test2:9000}"]
ports:
- containerPort: 9000
..................
kind: Service
metadata:
  name: ads-test
spec:
  type: ClusterIP
  ports:
  - protocol: TCP
    port: 9000
  selector:
    app: ads-test
So in the list of arguments we specify the Service through which that ADS instance should be reached over a TCP connection (not HTTP), i.e. ads-test:9000. Since this is a containerized app, I did not know what else I could specify as the server address except "ServiceName:port", because the app was not originally developed with containers in mind.
So the second YAML should differ only in the serverName value.
And I added an additional Service, ads-test2:
args:
....
  "-serverName", "ads2"
  "-servers", "{ { ads , ads-test:9000 }, { ads2 , ads-test2:9000}"]
ports:
- containerPort: 9000
..................
kind: Service
metadata:
  name: ads-test2
spec:
  type: ClusterIP
  ports:
  - protocol: TCP
    port: 9000
  selector:
    app: ads-test2
Since this is actually the same app, differing only in one configuration argument (serverName), I was wondering whether there is some way to simplify this and use a single Service to access both ADS servers, while still keeping the servers argument that activates this companion mode, which is needed for performance reasons so that both servers have up-to-date information.
Thank you
No, you cannot have a single Service for two logically different Pods. A Service normally load-balances between replica Pods, so a request to your Pods will be routed to any of them automatically. In your case you don't want that to happen: a request meant for ads could land on the ads2 server Pod.
The recommended way is to have two different Services for your Pods, or you can put multiple containers inside a single Pod and use a single Service in that case.
The server name argument can be taken from the environment, for example:
env:
- name: SERVER_NAME    # illustrative variable name
  value: "ads"         # "ads2" for the second instance
# then reference it in the container args as "-serverName", "$(SERVER_NAME)"
I am migrating my current service to Kubernetes. Currently, back-end services are resolved via mod_cluster: the mod_cluster manager runs on httpd, and mod_cluster clients auto-register their web contexts with the httpd/mod_cluster manager on startup.
user-->ingress-rule--> httpd [running mod_cluster manager]--> Jboss[mod_cluster clients]
I resolve my UI via the following ingress rule:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: httpd
  annotations:
    nginx.ingress.kubernetes.io/rewrite-target: /
    nginx.ingress.kubernetes.io/ssl-passthrough: "true"
spec:
  rules:
  - host: myk8s.myath.myserv.com
    http:
      paths:
      - path: /
        backend:
          serviceName: httpd
          servicePort: 443
  tls:
  - hosts:
    - myk8s.myath.myserv.com
This works well: the UI resolves, I can log in, and all static content resolves, etc.
mod_cluster exposes services such as myservice. I disabled mod_cluster and created a Kubernetes Service myservice that resolves to the back-end Pod, thinking that the Ingress rule would get the request as far as httpd and httpd would then be able to resolve the back-end service via Kubernetes, but I get 404s because I am unable to resolve myservice.
The service can be resolved via reverse proxy rules such as the ones below, but this is not the preferred solution:
# Redirect to myjbossserv (a Service registered in Kubernetes)
ProxyPass /myservice/services/command/ http://myjbossserv:8080/myservice/services/command/
ProxyPassReverse /myservice/services/command/ http://myjbossserv:8080/myservice/services/command/
Any help much appreciated
The simplest way to solve this, catering for all HA and robustness use cases, was to use reverse proxy rules. There are multiple ways to configure these, such as at image build time or via ConfigMaps...
# Redirect to myjbossserv (a Service registered in Kubernetes)
ProxyPass /myservice/services/command/ http://myjbossserv:8080/myservice/services/command/
ProxyPassReverse /myservice/services/command/ http://myjbossserv:8080/myservice/services/command/
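For the ConfigMap route, a minimal sketch (the ConfigMap name and mount location are illustrative) is to keep the rules in a ConfigMap and mount it into the httpd container wherever your image includes extra .conf files from:
apiVersion: v1
kind: ConfigMap
metadata:
  name: httpd-proxy-rules        # illustrative name
data:
  proxy.conf: |
    ProxyPass /myservice/services/command/ http://myjbossserv:8080/myservice/services/command/
    ProxyPassReverse /myservice/services/command/ http://myjbossserv:8080/myservice/services/command/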
I am running a RESTful service behind a self-signed cert through NGINX on Google Cloud Kubernetes infrastructure.
The Kubernetes service load balancer exposes 443 and routes the traffic to those containers. All is working just fine as expected, other than having to ask internal clients to ignore the self-signed cert warning!
It is time to move to a CA cert, so the only option as far as I can see is the HTTPS load balancer, but I couldn't figure out how to reroute the traffic to the service load balancer, or directly to the pods, as the service (HTTP) load balancer does.
Any help appreciated.
Update Firewall Rules for:
IP: 130.211.0.0/22
tcp:30000-32767
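A hedged example of that firewall rule with gcloud (the rule and network names are illustrative):
gcloud compute firewall-rules create allow-lb-health-checks \
    --network=default \
    --source-ranges=130.211.0.0/22 \
    --allow=tcp:30000-32767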
Create NodePort type service:
apiVersion: v1
kind: Service
metadata:
  name: yourservicenodeport
  labels:
    name: your-service-node-port
spec:
  type: NodePort
  ports:
  - port: 80
    nodePort: 30001
  selector:
    name: yourpods
Create a health check for the NodePort, which in this case is 30001.
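For example, with gcloud (the health check name and request path are illustrative):
gcloud compute health-checks create http nodeport-30001-hc \
    --port=30001 \
    --request-path=/healthz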
Create an ingress service:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: youTheking-ingress-service
spec:
  backend:
    serviceName: yourservice
    servicePort: 80
Wait for a few minutes; be patient.
Change the health check on the HTTP load balancer:
a. Go to Load Balancing on the Networking tab.
b. Click the Advanced menu.
c. Go to Backend Services and click Edit.
d. Update the health check option and use the one created for the NodePort service.
Repeat the wait so that the instance group is recognized as healthy.
If SSL is needed, go back to the load balancer, edit it, click Frontend Configuration, then add HTTPS with your cert.
You are ready to roll.
I'm not sure I fully understand your question, but I'll try to answer it anyway.
You have two options for exposing your service using a cert signed by a trusted CA:
Do what you are doing today but with the real cert. You will probably want to put the cert into a Secret and point your nginx configuration at it to load the cert (see the sketch after this list).
Replace nginx with the Google L7 load balancer. You would upload your certificate to Google, configure the L7 load balancer to terminate HTTPS, and forward traffic to your backends.
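For option 1, a rough sketch of wiring a CA-signed cert into nginx via a Secret (all names, paths and file names are illustrative):
# Create the Secret from the CA-signed cert and key:
#   kubectl create secret tls my-service-tls --cert=server.crt --key=server.key
# Then mount it into the nginx container and point nginx.conf's
# ssl_certificate / ssl_certificate_key directives at the mounted files:
spec:
  containers:
    - name: nginx
      image: nginx
      volumeMounts:
        - name: tls
          mountPath: /etc/nginx/tls
          readOnly: true
  volumes:
    - name: tls
      secret:
        secretName: my-service-tls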