Connecting grails rabbitmq plugin to multiple hosts - grails

Is there a way to configure the grails rabbitmq plugin to connect to a clustered rabbitmq environment for failover, or is there an alternative library/plugin I could use to achieve that?
grails 2.2.0
rabbitmq 1.0.0

I don't think there is an easy way to do this in grails alone...
I would recommend using a load balancer in front of your rabbitmq cluster. This allows you to route traffic to other nodes in the cluster if one fails. Once you have the load balancer configured, just point your rabbitmq.connectionfactory.hostname to the load balancer and it will do the rest!
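As a rough sketch, the plugin's connection factory settings in Config.groovy would then simply point at the balancer rather than at any individual node (hostname and credentials below are placeholders):

    rabbitmq {
        connectionfactory {
            username = 'guest'
            password = 'guest'
            // point at the load balancer in front of the cluster, not a single node
            hostname = 'rabbit-lb.example.com'
        }
    }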
Load balancer configuration varies depending on the type you use. If you don't already have a load balancer, HAProxy is a good option. There are some good examples online, and step-by-step instructions in the "RabbitMQ in Action" book (if you have it).
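For a feel of what the HAProxy side could look like, here is a minimal sketch; the node names and ports are assumptions you would adjust to your own cluster:

    listen rabbitmq
        bind *:5672
        mode tcp
        balance roundrobin
        # forward AMQP traffic to whichever nodes pass the health check
        server rabbit1 rabbit1.example.com:5672 check
        server rabbit2 rabbit2.example.com:5672 check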

Related

How do I serve my ECS ec2 server through https?

I am working on a backend server launched on an ECS cluster, hosted on an EC2 instance using Docker.
The ECS service is running great, exposed by IP address and port, but to be used with my iOS app it needs to be served over HTTPS.
How do I serve my ECS container over HTTPS? I have read a couple of things about using a load balancer, but the tutorials are outdated and I can't find one that shows the configuration after the ECS cluster has already been created.
Please point me in the right direction so I can get it served over HTTPS.
You need to have the following resources:
DNS address
Valid SSL Certificate
Load Balancer
Load balancer security group
Target Group
The target group will mediate between your server and your load balancer.
Also, in the load balancer's security group, define all the rules you currently have in the server's security group, and in the server's security group add a rule that is open to all traffic on all ports with the load balancer's security group as the source (instead of an IP).
This guide can help you: https://docs.aws.amazon.com/elasticloadbalancing/latest/classic/elb-create-https-ssl-load-balancer.html
(look at "Create an HTTPS/SSL load balancer using the console")
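If you go the Application Load Balancer route instead, the wiring can also be done after the cluster already exists, from the CLI; a rough sketch, where every name, ID and ARN is a placeholder you would substitute:

    # target group the ECS service/instances will register into
    aws elbv2 create-target-group --name my-ecs-tg --protocol HTTP --port 80 \
        --vpc-id vpc-0123456789abcdef0 --target-type instance

    # the load balancer itself, in public subnets, with its own security group
    aws elbv2 create-load-balancer --name my-ecs-alb \
        --subnets subnet-aaaa subnet-bbbb --security-groups sg-0123456789abcdef0

    # HTTPS listener using an ACM certificate, forwarding to the target group
    aws elbv2 create-listener --load-balancer-arn <alb-arn> \
        --protocol HTTPS --port 443 \
        --certificates CertificateArn=<acm-certificate-arn> \
        --default-actions Type=forward,TargetGroupArn=<target-group-arn>

After that you point your DNS record at the load balancer and attach the target group to the ECS service.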

Simple configurable web server docker image for test environment

I have a microservice that sends HTTP requests to an external non-dockerized service.
Can anybody point me to a docker image of a simple web server that I can start as part of my test environment? Ideally, it should be simple to customize (endpoints, ports, etc.) and provide some meaningful logging of the incoming requests.
It depends on your preference for vendor. Here are some to choose from:
Linux:
Nginx
Apache httpd
Microsoft:
IIS
The links to those pages show you a few different distros for each and contain the configuration information.
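For instance, a stock nginx image already logs every incoming request to stdout, so a minimal test setup could look like this (the container name, port mapping and config path are placeholders):

    # run nginx with a custom config mounted read-only
    docker run -d --name test-web -p 8080:80 \
        -v "$(pwd)/default.conf":/etc/nginx/conf.d/default.conf:ro \
        nginx:alpine

    # follow the access/error logs of incoming requests
    docker logs -f test-web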
You can look at my project: https://github.com/mateuszgruszczynski/cinema. It's a very crude and simple setup I use for performance-test trainings. It contains a few containers:
cinema-http / cinema-gateway - scala/akka based microservices
frontend - apache http server + simple php/js webpage
haproxy - haproxy as load balancer
plus some extra containers: postgres, mysql, jenkins, graphite, grafana
When it comes to the Dockerfiles and the compose file, it strongly depends on what technology you want to use for the HTTP server.
It does not have any extra logging, but that should be easy to add; or maybe the standard Apache HTTP logs will be enough for you.

What is the best way to use HTTP 2 with AWS Elastic beanstalk

I have a Ruby on Rails app hosted on AWS using Elastic Beanstalk which works with HTTP/1.1; now I want to use HTTP/2. Can someone suggest the best approach?
If I remember correctly, when you add a new load balancer to your Elastic Beanstalk environment, it defaults to using a Classic Load Balancer, which doesn't support HTTP/2. I think the solution would be using an Application Load Balancer, which does support it; you can find this info here. You can also specify it while creating your environment, as you can see here. This will only allow HTTP/2 communication between the client and the ALB; your ALB will convert those HTTP/2 requests into HTTP/1.1 to communicate with your instance.
As seen here: "If end-to-end HTTP/2 is a requirement for your application you can use a Layer 4 ELB ( Classic Load Balancer with TCP listener or Network Load Balancer). If you are interested also in SSL offloading the only option for now is Classic Load Balancer with an SSL listener."
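If you take the ALB approach above, note that the load balancer type is chosen when the environment is created. A sketch of how that could be declared, assuming an .ebextensions config file (the file name is a placeholder, and the setting only takes effect for a new environment):

    # .ebextensions/load-balancer.config
    option_settings:
      aws:elasticbeanstalk:environment:
        LoadBalancerType: application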

Customize Docker reverse DNS

I'm looking for a way to change what the reverse DNS resolves to in Docker.
If I set my container's FQDN to foo.bar I expect a reverse DNS lookup for its IP to resolve to foo.bar, but it always resolves to <container_name>.<network_name>.
Is there a way I can change that?
Docker's DNS support is designed to support container discovery within a cluster. It's not an application traffic management solution, so features are limited.
For example, it's possible to configure a DNS wildcard which resolves "*.foo.bar" URLs to a server running a container-savvy load balancer solution (a load balancer that knows where all the containers associated with each application are located and running).
That load balancer can then route traffic based on the incoming "Host" HTTP header:
"app1.foo.bar" -> "App1 Container1", "App1 Container2"
"app2.foo.bar" ->
"App2 Container1", "App2 Container2", "App2 Container3"
For a practical implementation take a look at how Kubernetes does load balancing (This is an advanced topic):
http://kubernetes.io/docs/user-guide/ingress/

How can I use vhosts with the same port in kubernetes pod?

I have an existing web application with a frontend and a backend which run on the same port (HTTPS/443) on different subdomains. Do I really need a load balancer in my pod to handle all incoming web traffic, or does Kubernetes have something already built in which I have missed so far?
I would encourage getting familiar with the concept of Ingress and IngressController: http://kubernetes.io/docs/user-guide/ingress/
Simplifying things a bit, you can look at Ingress as a sort of vhost/path service router/reverse proxy.
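For the subdomain case described above, a sketch of an Ingress could look like this (hostnames, service names and ports are assumptions; the API version matches the older linked docs and is deprecated in current clusters):

    apiVersion: extensions/v1beta1
    kind: Ingress
    metadata:
      name: web
    spec:
      rules:
      - host: app.example.com
        http:
          paths:
          - backend:
              serviceName: frontend
              servicePort: 443
      - host: api.example.com
        http:
          paths:
          - backend:
              serviceName: backend
              servicePort: 443

The IngressController (for example the nginx one) then does the actual vhost routing, so you do not need to run your own load balancer inside the pod.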
