GKE Windows Node, health check - gke-networking

I'm not used to posting things, but I have a problem. On GKE, I set up a Service of type LoadBalancer for Windows pods on Windows nodes, but the load balancer's health check fails for the Windows nodes. I can use externalTrafficPolicy: Cluster, but I need Local for my applications.
How can I fix it?
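For reference, the Service in question looks roughly like this (names, labels, and ports below are placeholders):

    apiVersion: v1
    kind: Service
    metadata:
      name: win-app                    # placeholder name
    spec:
      type: LoadBalancer
      externalTrafficPolicy: Local     # with Local, the cloud LB health check probes the
                                       # node port Kubernetes allocates as healthCheckNodePort
      selector:
        app: win-app                   # placeholder label selecting the Windows pods
      ports:
      - port: 80                       # placeholder service port
        targetPort: 8080               # placeholder container port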
Thanks for your help.
Alexandre Gué,
Team leader Infra
PI ELECTRONIQUE

Have you considered the health check ranges for the different load balancers?
For health checks to work, you must create ingress allow firewall rules so that traffic from Google Cloud probers can connect to your backends.
A firewall rule should have been created per this doc:
The following example creates an ingress firewall rule for the following load balancers:
Internal TCP/UDP Load Balancing (health checks)
Internal HTTP(S) Load Balancing (health checks)
TCP Proxy Load Balancing (health checks)
SSL Proxy Load Balancing (health checks)
HTTP(S) Load Balancing (health checks and legacy health checks)
For these load balancers, the source IP ranges for health checks (including legacy health checks if used for HTTP(S) Load Balancing) are:
35.191.0.0/16
130.211.0.0/22
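A sketch of such a rule with gcloud, assuming a placeholder network and target tag, and assuming the health check targets a port in the NodePort range:

    # Allow Google Cloud health check probers to reach the nodes.
    # Network name, target tag, and port range are assumptions; adjust to the
    # port your load balancer's health check actually uses.
    gcloud compute firewall-rules create allow-gcp-health-checks \
        --network=default \
        --direction=INGRESS \
        --action=ALLOW \
        --rules=tcp:30000-32767 \
        --source-ranges=35.191.0.0/16,130.211.0.0/22 \
        --target-tags=windows-nodes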
Also be aware of the limitations of Windows Server containers:
There are some Kubernetes features that are not yet supported for Windows Server containers. In addition, some features are Linux-specific and do not work for Windows. For the complete list of supported and unsupported Kubernetes features, see the Kubernetes documentation.

Related

How do I serve my ECS EC2 server through HTTPS?

I am working on a backend server launched on an ECS cluster, hosted on an EC2 instance using Docker.
The ECS service is running great, exposed by IP address and port, but to be used with my iOS app it needs to be served over HTTPS.
How do I serve my ECS container over HTTPS? I have read a couple of things about using a load balancer, but the tutorials are outdated and I can't find one that shows the configuration after the ECS cluster has already been created.
Please point me in the right direction so I can get it served over HTTPS.
You need to have the following resources:
DNS address
Valid SSL Certificate
Load Balancer
Load balancer security group
Target Group
The target group will mediate between your server and your load balancer.
Also, in the load balancer's security group, define all the rules you currently have in the server's security group, and in the server's security group add a rule that is open to all traffic on all ports with the load balancer's security group as the source, instead of an IP range.
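A sketch of that last rule with the AWS CLI, assuming hypothetical security group IDs (one for the instance, one for the load balancer):

    # Allow all traffic from the load balancer's security group into the
    # server's security group (group IDs are placeholders).
    aws ec2 authorize-security-group-ingress \
        --group-id sg-0123456789server \
        --protocol all \
        --source-group sg-0123456789lb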
This guide can help you: https://docs.aws.amazon.com/elasticloadbalancing/latest/classic/elb-create-https-ssl-load-balancer.html
(look at "Create an HTTPS/SSL load balancer using the console")
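If you prefer the CLI, a rough sketch using an Application Load Balancer (all names, ARNs, subnet/VPC/instance IDs, ports, and the ACM certificate are placeholders):

    # Create the load balancer in two public subnets with its security group.
    aws elbv2 create-load-balancer --name my-alb \
        --subnets subnet-aaaa1111 subnet-bbbb2222 \
        --security-groups sg-0123456789lb

    # Create a target group for the container's host port and register the EC2
    # instance that backs the ECS task.
    aws elbv2 create-target-group --name my-targets \
        --protocol HTTP --port 8080 --vpc-id vpc-0123456789 \
        --target-type instance
    aws elbv2 register-targets --target-group-arn <target-group-arn> \
        --targets Id=i-0123456789abcdef0

    # Terminate HTTPS on the load balancer with an ACM certificate and forward
    # to the target group.
    aws elbv2 create-listener --load-balancer-arn <load-balancer-arn> \
        --protocol HTTPS --port 443 \
        --certificates CertificateArn=<acm-certificate-arn> \
        --default-actions Type=forward,TargetGroupArn=<target-group-arn>

Afterwards, point your DNS address at the load balancer's DNS name.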

Docker Reverse Proxy Ingress on Swarm

We operate a Docker cluster with several workers and a manager.
Our current problem:
We have the jwilder nginx-proxy running on all nodes, which does not cause any problems. What causes us problems is when we operate a service, e.g. Grav.
It is only reachable if the domain points to the IP address of the node on which the service is running at that time.
My question now:
Is there a way to route the domain in such a way that we only have to set one A record and Docker does the internal routing to the respective node where the website is running?
If yes, how would we realize this, or are there other alternatives with which this is easier to implement?
Useful information:
1 manager
4 workers
5 public IP addresses in total
All bare-metal servers with Docker (without Kubernetes etc.)
1 decentralized data server with NVMe
The website is currently only reachable if the domain points to the respective worker. The goal is one public IP for all domains, with failover, including internal routing to the respective workers.
Resources:
We don't mind dedicating resources to implement this; other servers could also be used for this scenario.
PS: You could also contact me through other channels for testing purposes, etc.
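What is described here sounds like Docker Swarm's built-in ingress routing mesh: a port published in swarm mode is accepted on every node and forwarded to a node that actually runs a task. A minimal sketch, assuming the nodes are already joined in swarm mode and using placeholder service, image, and port names:

    # Publish the site on the ingress routing mesh; any node's public IP then
    # answers on port 80 and routes internally to a node running a task.
    docker service create \
        --name website \
        --replicas 2 \
        --publish published=80,target=80 \
        my-registry/grav-site:latest    # placeholder image

With that, a single A record (or several, for failover) can point at any of the five public IPs.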

Customize Docker reverse DNS

I'm looking for a way to change what the reverse DNS resolves to in Docker.
If I set my container's FQDN to foo.bar I expect a reverse DNS lookup for its IP to resolve to foo.bar, but it always resolves to <container_name>.<network_name>.
Is there a way I can change that?
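To reproduce what the question describes, a small sketch (network, container, and image names are placeholders; substitute the address that docker inspect reports for the container):

    # Create a user-defined network and a container with an explicit FQDN.
    docker network create testnet
    docker run -d --name web --network testnet --hostname foo.bar nginx

    # Reverse lookup from another container on the same network; per the
    # question, this resolves to <container_name>.<network_name> (web.testnet)
    # rather than foo.bar.
    docker run --rm --network testnet busybox nslookup 172.18.0.2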
Docker's DNS support is designed for container discovery within a cluster. It's not an application traffic management solution, so features are limited.
For example, it's possible to configure a DNS wildcard that resolves "*.foo.bar" URLs to a server running a container-aware load balancer (a load balancer that knows where the containers associated with each application are located and running).
That load balancer can then route traffic based on the incoming "Host" HTTP header:
"app1.foo.bar" -> "App1 Container1", "App1 Container2"
"app2.foo.bar" -> "App2 Container1", "App2 Container2", "App2 Container3"
For a practical implementation, take a look at how Kubernetes does load balancing (this is an advanced topic):
http://kubernetes.io/docs/user-guide/ingress/
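A minimal sketch of that host-based routing using a Kubernetes Ingress (hostnames, Service names, and ports are placeholders):

    apiVersion: networking.k8s.io/v1
    kind: Ingress
    metadata:
      name: apps-by-host
    spec:
      rules:
      - host: app1.foo.bar            # requests with Host: app1.foo.bar ...
        http:
          paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: app1            # ... go to the Service in front of the App1 containers
                port:
                  number: 80
      - host: app2.foo.bar
        http:
          paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: app2
                port:
                  number: 80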

Google Cloud Platform DataFlow workers IP addresses

Is it possible to know what range of external IPs the Dataflow workers on GCP are using? The goal is to set up some kind of IP filtering on an external service, so that only our Dataflow jobs running on GCP can access the service.
The best solution would be to upgrade the external service so that you can use SSL or other mechanisms of strong authentication instead of IP filtering.
You can use the --network= option to control the GCE network that the worker VMs are assigned to. Take a look at the GCE docs on networking for details on how to set up a VPN (as the comment from Elmar suggested). You could also look at setting up a single machine in the network with a static external IP and using it as a proxy for the other VMs in the network.
This is not a use pattern we have tested, so there may be issues with latency or throughput of traffic through the proxy/VPN. You will likely need to be careful to send only your own traffic through this proxy so that you don't accidentally hijack the traffic each worker uses to communicate with the Dataflow service.
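A sketch of pinning the workers to a specific network at job launch, assuming a Python/Beam pipeline and placeholder project, bucket, and network names:

    # Launch the pipeline on Dataflow with the worker VMs attached to a chosen
    # VPC network (whose routes/VPN/proxy then shape their egress traffic).
    python my_pipeline.py \
        --runner DataflowRunner \
        --project my-project \
        --region us-central1 \
        --temp_location gs://my-bucket/tmp \
        --network my-vpc-network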

Does JIRA work on Google Compute Engine VM

Is JIRA supported on GCE? If so, how do we make it work?
We have installed the 64-bit .bin of JIRA (6.4.1) and opened the necessary custom HTTP ports under Networks.
We started JIRA as a service, but are unable to reach it via a browser. No error message other than a timed-out error!
Any help would be highly appreciated.
Note: We are new to Google Cloud Platform.
Did you enable the HTTP and HTTPS services on your instance? By default a GCE instance does not allow HTTP and HTTPS traffic; you have to allow it manually.
The Jira configuration for Google Compute Engine can be tricky. You need to make sure that:
The firewall rules under Networking allow a connection to the Jira HTTP port, or HTTP is enabled in the VM properties (a gcloud sketch follows below)
The global networking rules allow TCP traffic on this port
The virtual network has routes configured
If you use Apache as a proxy for Jira (recommended), Apache is configured to point to the Tomcat port
Your Tomcat is configured
You have enabled port allocation using the setcap utility
Your local machine's firewall allows the connection (on Red Hat, iptables is enabled by default and blocks connections)
As you can see, it can be tricky to install Jira on Google Cloud. It may be a good idea to use a deployment service like Deploy4Me to do this quickly and automatically.
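For the first item, a sketch of a firewall rule with gcloud, assuming Jira/Tomcat listens on its default port 8080 and that the VM carries a placeholder target tag:

    # Allow inbound TCP traffic to the Jira/Tomcat port from anywhere.
    # Port and tag are assumptions; adjust to your connector configuration.
    gcloud compute firewall-rules create allow-jira-http \
        --network=default \
        --allow=tcp:8080 \
        --source-ranges=0.0.0.0/0 \
        --target-tags=jira-server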
