ECS Fargate / single ALB / multiple docker containers - docker

Does anyone have an example of how I could build an ECS cluster with a single Application Load Balancer forwarding host-header requests to two different Docker containers?
I want to have one ALB for a single ECS cluster running both my Angular site and a .NET web service. Ultimately my goal is to script this in Terraform.

Without knowing all the details, I think you are looking for path-based routing or, even better, host-based routing.
Terraform
You need an aws_lb_listener_rule (Load Balancer Listener Rule) for each host/path.
You need an aws_alb_target_group for each ECS service, and you reference the correct target group inside the aws_lb_listener_rule resource.
General
Listener Rules
Before you start using your Application Load Balancer, you must add one or more listeners. A listener is a process that checks for connection requests, using the protocol and port that you configure. The rules that you define for a listener determine how the load balancer routes requests to the targets in one or more target groups. (docs)
Use Path-Based Routing with Your Application Load Balancer
https://docs.aws.amazon.com/en_us/elasticloadbalancing/latest/application/tutorial-load-balancer-routing.html
Examples
Host Based Listener Rule
resource "aws_lb_listener_rule" "host_based_routing" {
  listener_arn = aws_lb_listener.front_end.arn
  priority     = 99

  action {
    type             = "forward"
    target_group_arn = aws_lb_target_group.static.arn
  }

  # Note: this field/values condition syntax is for AWS provider < 3.0;
  # on 3.0+ use a nested host_header { values = [...] } block instead.
  condition {
    field  = "host-header"
    values = ["my-service.*.terraform.io"]
  }
}
The condition block defines the host (or, in the example below, the path pattern) for which requests are forwarded to that target group.
Path Based Listener Rule
resource "aws_lb_listener_rule" "static" {
  listener_arn = aws_lb_listener.front_end.arn
  priority     = 100

  action {
    type             = "forward"
    target_group_arn = aws_lb_target_group.static.arn
  }

  condition {
    field  = "path-pattern"
    values = ["/static/*"]
  }
}
Target group
resource "aws_alb_target_group" "alb_target_group" {
  name                 = "example-target-group"
  protocol             = "HTTP"
  port                 = var.exposed_port
  vpc_id               = var.vpc_id
  deregistration_delay = 30

  health_check {
    path    = var.service_health_check_path
    matcher = "200-399"
  }
}
https://www.terraform.io/docs/providers/aws/r/lb_listener_rule.html
https://www.terraform.io/docs/providers/aws/r/lb_target_group.html
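To close the loop, each ECS service attaches itself to its target group via a load_balancer block. A minimal sketch (the service, cluster, and task definition names here are hypothetical, not from the question):

```hcl
# Hypothetical ECS service wired to the target group above; you would have
# one such service per target group (e.g. one for the Angular site, one for
# the .NET API), each matched by its own listener rule.
resource "aws_ecs_service" "angular_site" {
  name            = "angular-site"             # assumed name
  cluster         = aws_ecs_cluster.main.id    # assumed cluster resource
  task_definition = aws_ecs_task_definition.angular.arn
  desired_count   = 1
  launch_type     = "FARGATE"

  load_balancer {
    target_group_arn = aws_alb_target_group.alb_target_group.arn
    container_name   = "angular-site"  # must match the task definition
    container_port   = 80
  }
}
```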

Related

How to redirect from domain to another domain in Next.js v13 app on server side behind proxy

I have many domains and I want to redirect them all to one main domain.
List of domains:
example.com
example1.com
example2.com
example3.com
The main domain I want to redirect the other domains to is example.com.
There is a great answer to redirect Next.js on the server side to a different path.
Next.js >= 12: now you can do redirects using middleware. Create a _middleware.js file inside the pages folder (or any subfolder inside pages):
import { NextResponse, NextRequest } from 'next/server'

export async function middleware(req, ev) {
  const { pathname } = req.nextUrl
  if (pathname == '/') {
    return NextResponse.redirect('/hello-nextjs')
  }
  return NextResponse.next()
}
Source: https://stackoverflow.com/a/58182678/10826693
Note: For Next.js v13, you must create the middleware.js file in the root directory of Next.js instead of pages/_middleware.js as mentioned in that answer.
If I try to redirect to another domain, the TypeScript code in middleware.ts in the root looks like this:
/* eslint-disable @next/next/no-server-import-in-page */
import { NextResponse, NextRequest } from 'next/server'

export async function middleware(req: NextRequest) {
  const url = req.nextUrl.clone()
  console.log(url.host) // logs localhost:3000
  if (!url.host.match(/example.com/)) {
    url.host = 'example.com'
    return NextResponse.redirect(url) // never executes because the URL is always localhost in the Docker container
  }
  return NextResponse.next()
}
However, a Next.js v13 application running in a Docker container behind a proxy server always sees localhost in the host: url.host always equals localhost plus the port exposed inside the container (e.g. localhost:3000).
How can I redirect the domains example1.com, example2.com and example3.com to example.com, preserving the path, query parameters and hash, when I only see localhost on the server side?
If you want to redirect all domains hitting the Docker container to the main domain, you need to read the originally requested host from the X-Forwarded-Host header.
In addition to the host, you must also set the correct port (80/443, from the X-Forwarded-Port header) and protocol (http/https, from the X-Forwarded-Proto header).
Next.js v13: create the middleware.ts file in the root of the app and, in production, redirect all domains that do not match example.com:
/* eslint-disable @next/next/no-server-import-in-page */
import { NextRequest, NextResponse } from 'next/server';

export function middleware(request: NextRequest) {
  const url = request.nextUrl.clone();
  const isProduction = process.env.NODE_ENV === 'production'; // redirect only in production
  const requestedHost = request.headers.get('X-Forwarded-Host');
  if (isProduction && requestedHost && !requestedHost.match(/example.com/)) {
    const host = `example.com`; // set your main domain
    const requestedPort = request.headers.get('X-Forwarded-Port');
    const requestedProto = request.headers.get('X-Forwarded-Proto');
    url.host = host;
    url.protocol = requestedProto || url.protocol;
    url.port = requestedPort || url.port;
    return NextResponse.redirect(url);
  }
  return NextResponse.next();
}
Now all domains example1.com, example2.com and example3.com are redirected to example.com including the same path, query parameters and hash.
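The same rewrite logic can be exercised outside Next.js with the standard WHATWG URL class. This is a standalone sketch; the header names match the middleware above, but the function name is made up:

```javascript
// Standalone sketch of the header-based rewrite: build the redirect target
// for the main domain from the X-Forwarded-* headers, or return null when
// no redirect is needed.
function buildRedirectUrl(requestUrl, headers, mainHost = 'example.com') {
  const url = new URL(requestUrl);
  const requestedHost = headers['x-forwarded-host'];
  if (!requestedHost || requestedHost.match(/example\.com/)) {
    return null; // no forwarded host, or already on the main domain
  }
  url.host = mainHost;
  // Restore the externally visible protocol and port; path, query and hash
  // are preserved by the URL object itself.
  if (headers['x-forwarded-proto']) url.protocol = headers['x-forwarded-proto'];
  if (headers['x-forwarded-port']) url.port = headers['x-forwarded-port'];
  return url.toString();
}
```

For example, buildRedirectUrl('http://localhost:3000/products/1?ref=x#top', { 'x-forwarded-host': 'example1.com', 'x-forwarded-proto': 'https', 'x-forwarded-port': '443' }) yields 'https://example.com/products/1?ref=x#top' (443 is the default https port, so the URL object drops it).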

Creating a NetworkLoadBalancer with existing Elastic IPs

I'm trying to set a pair of Elastic IPs as the public-facing addresses for a NetworkLoadBalancer object and running into issues. The console.log("CFN NLB"); line in the code below never executes because the load balancer definition throws the following error:
There are no 'Public' subnet groups in this VPC. Available types:
Subprocess exited with error 1
I'm doing it this way because there's no high-level way to assign existing Elastic IPs to a load balancer without using the Cfn escape hatch as discussed here.
If I enable the commented code in the NetworkLoadBalancer definition, the stack synths successfully but then I get the following when deploying:
You can specify either subnets or subnet mappings, not both (Service: AmazonElasticLoadBalancing; Status Code: 400; Error Code: ValidationError; Request ID: e4b90830-xxxx-4f13-8777-bcf56946781a; Proxy: null)
Code:
const pubSubnet1ID = 'subnet-xxxxxfa6d669cd496';
const pubSubnet2ID = 'subnet-xxxxxbaf8d2d77afb';
const pubSubnet1 = Subnet.fromSubnetId(this, 'pubSubnet1', pubSubnet1ID);
const pubSubnet2 = Subnet.fromSubnetId(this, 'pubSubnet2', pubSubnet2ID);

console.log("Tagging.");
Tags.of(pubSubnet1).add('aws-cdk:subnet-type', 'Public');
Tags.of(pubSubnet2).add('aws-cdk:subnet-type', 'Public');

console.log("Load Balancer...");
this.loadBalancer = new NetworkLoadBalancer(this, 'dnsLB', {
  vpc: assets.vpc,
  internetFacing: true,
  crossZoneEnabled: true,
  // vpcSubnets: {
  //   subnets: [pubSubnet1, pubSubnet2],
  // },
});

console.log("CFN NLB");
this.cfnNLB = this.loadBalancer.node.defaultChild as CfnLoadBalancer;

console.log("Mappings");
const subnetMapping1: CfnLoadBalancer.SubnetMappingProperty = {
  subnetId: pubSubnet1ID,
  allocationId: assets.elasticIp1.attrAllocationId,
};
const subnetMapping2: CfnLoadBalancer.SubnetMappingProperty = {
  subnetId: pubSubnet2ID,
  allocationId: assets.elasticIp2.attrAllocationId,
};

console.log("Mapping assignment");
this.cfnNLB.subnetMappings = [subnetMapping1, subnetMapping2];
I've found references to CDK wanting a tag of aws-cdk:subnet-type with a value of Public and added that tag to our public subnets (both manually and programmatically), but the error remains unchanged.
I found the solution. Uncommenting the vpcSubnets: part of the loadBalancer definition allowed me to get past the first error message. To get around the "You can specify either subnets or subnet mappings, not both" message, I added
this.cfnNLB.addDeletionOverride('Properties.Subnets');
before setting the subnetMappings attribute.
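Putting both fixes together, the working definition looks roughly like this (a sketch assuming the same assets, subnet IDs and Elastic IP allocations as above; import names may vary with your CDK version):

```typescript
// Sketch: NLB over explicit public subnets, with the generated Subnets
// property removed so the low-level SubnetMappings (carrying the EIP
// allocation IDs) can be used instead.
const loadBalancer = new NetworkLoadBalancer(this, 'dnsLB', {
  vpc: assets.vpc,
  internetFacing: true,
  crossZoneEnabled: true,
  vpcSubnets: { subnets: [pubSubnet1, pubSubnet2] }, // fixes "no 'Public' subnet groups"
});

const cfnNLB = loadBalancer.node.defaultChild as CfnLoadBalancer;
// Drop the generated Subnets property; CloudFormation rejects Subnets and
// SubnetMappings together.
cfnNLB.addDeletionOverride('Properties.Subnets');
cfnNLB.subnetMappings = [
  { subnetId: pubSubnet1ID, allocationId: assets.elasticIp1.attrAllocationId },
  { subnetId: pubSubnet2ID, allocationId: assets.elasticIp2.attrAllocationId },
];
```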

How can I activate JMX for Caffeine cache

I was looking at this SO question, but here I only want Caffeine to start reporting to JMX.
I have added an application.conf file as such and referenced it via -Dconfig.file:
caffeine.jcache {
  # A named cache is configured by nesting a new definition under the
  # caffeine.jcache namespace. The per-cache configuration is overlaid
  # on top of the default configuration.
  default {
    # The monitoring configuration
    monitoring {
      # If JCache statistics should be recorded and externalized via JMX
      statistics = true
      # If the configuration should be externalized via JMX
      management = true
    }
  }
}
It is not working. I suspect it might be related to JCache, but I am not sure what the expected way to implement this basic monitoring is.
The cache instance is registered with the MBean server when it is instantiated by the CacheManager. The following test uses the programmatic API for simplicity.
import static com.google.common.truth.Truth.assertThat;

import java.lang.management.ManagementFactory;
import javax.cache.Caching;
import javax.cache.management.CacheStatisticsMXBean;
import javax.management.JMX;
import javax.management.MalformedObjectNameException;
import javax.management.ObjectName;
import org.junit.jupiter.api.Test;
import com.github.benmanes.caffeine.jcache.configuration.CaffeineConfiguration;

public final class JmxTest {

  @Test
  public void jmx() throws MalformedObjectNameException {
    var config = new CaffeineConfiguration<>();
    config.setManagementEnabled(true);
    config.setStatisticsEnabled(true);
    var manager = Caching.getCachingProvider().getCacheManager();
    var cache = manager.createCache("jmx", config);
    cache.put(1, 2);

    var server = ManagementFactory.getPlatformMBeanServer();
    String name = String.format("javax.cache:type=%s,CacheManager=%s,Cache=%s",
        "CacheStatistics", manager.getURI().toString(), cache.getName());
    var stats = JMX.newMBeanProxy(server, new ObjectName(name), CacheStatisticsMXBean.class);
    assertThat(stats.getCachePuts()).isEqualTo(1);
  }
}
If you do not need JCache for an integration then you will likely prefer to use the native APIs and a metrics library. Caffeine is supported by Micrometer, Dropwizard Metrics, and the Prometheus client. While JCache is great for framework integrations, its API is rigid and can cause surprising performance issues.

Azure Cloud Service - Configure Session from RoleEnvironment

Our application is hosted as a Cloud Service in Azure and we have all our connection strings and other connection-like settings defined in the ServiceConfiguration files. We are also using a Redis Cache as the session state store. We are trying to specify the Redis Cache host and access key in the ServiceConfig and then use those values for the deployment depending on where the bits land. The problem is session is defined in the web.config and we can't pull RoleEnvironment settings into the web.config.
We tried altering the web.config in the Application_Startup method but get errors that access is denied to the web.config on startup, which makes sense.
We don't really want to write deployment scripts to give the Network Service user access to the web.config.
Is there a way to setup session to use a different Redis Cache at runtime of the application?
The error that we are getting is "Access to the path 'E:\sitesroot\0\web.config' is denied". I read an article that gave some examples of how to give the Network Service user access to the web.config as part of the role startup process, and after doing that we have access to the file, but now get the following error: "Unable to save config to file 'E:\sitesroot\0\web.config'."
We ended up being able to solve this using the ServerManager API in the WebRole.OnStart method. We did something like this:
using (var server = new ServerManager())
{
    try
    {
        Site site = server.Sites[RoleEnvironment.CurrentRoleInstance.Id + "_Web"];
        string physicalPath = site.Applications["/"].VirtualDirectories["/"].PhysicalPath;
        string webConfigPath = Path.Combine(physicalPath, "web.config");
        var doc = System.Xml.Linq.XDocument.Load(webConfigPath);
        var redisCacheProviderSettings = doc.Descendants("sessionState").Single()
            .Descendants("providers").Single()
            .Descendants("add").Single();
        redisCacheProviderSettings.SetAttributeValue("host", RoleEnvironment.GetConfigurationSettingValue("SessionRedisCacheHost"));
        redisCacheProviderSettings.SetAttributeValue("accessKey", RoleEnvironment.GetConfigurationSettingValue("SessionRedisCacheAccessKey"));
        redisCacheProviderSettings.SetAttributeValue("ssl", "true");
        redisCacheProviderSettings.SetAttributeValue("throwOnError", "false");
        doc.Save(webConfigPath);
    }
    catch (Exception ex)
    {
        // Log the error
    }
}

How to configure exclusive consumer with Grails and JMS / ActiveMQ?

I have a Grails app that subscribes to a given ActiveMQ topic using the JMS plugin. How can I make the TestService class an exclusive consumer? Details of exclusive consumer here
The use case: I am running the consumer on AWS EC2, the ActiveMQ feed has a durability of 5 minutes, and it takes longer than that to replace the instance if it dies. I can't afford to lose messages, and message order must be preserved. Hence I wish to use multiple instances, where the first instance to connect is the one the broker sends every message to, and the others sit in reserve. If the first instance dies, the AMQ broker will send the messages to one of the other instances.
Also, what criteria are used by JMS to determine when an exclusive consumer has died or gone away?
// resources.groovy
beans = {
    jmsConnectionFactory(org.apache.activemq.ActiveMQConnectionFactory) {
        brokerURL = 'tcp://example.com:1234'
        userName = 'user'
        password = 'password'
    }
}
class TestService {

    static exposes = ["jms"]
    static destination = "SOME_TOPIC_NAME"
    static isTopic = true

    def onMessage(msg) {
        // handle message
        // explicitly return null to prevent an unwanted replyTo attempt
        return null
    }
}
First of all, your example uses topics; that won't work for exclusive consumers. You want queues:
class TestService {
    static exposes = ["jms"]
    static destination = "MYQUEUE"
    ...
}
Configuring exclusive consumers in ActiveMQ is straightforward:
queue = new ActiveMQQueue("MYQUEUE?consumer.exclusive=true");
...but it may be tricky with the Grails plugin; you can try one of these:
class TestService {
    static exposes = ["jms"]
    static destination = "MYQUEUE?consumer.exclusive=true"

    def onMessage(msg) { ... }
}
or with the @Queue annotation:
class TestService {
    static exposes = ["jms"]

    @Queue(name = "MYQUEUE?consumer.exclusive=true")
    def handleMessage(msg) { ... }
}
Regarding your question on how the broker determines whether a consumer has died: I'm not sure exactly how ActiveMQ does it, but in most JMS implementations TCP failures trigger an exception on the connection; the peer (the broker in this case) handles the exception and fails over to the next available consumer.
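On the ActiveMQ side, dead connections are also caught by the transport's inactivity monitor, whose timeout can be tuned on the broker URL. A sketch of the connection factory with that option (the 10-second value is only an example, not a recommendation):

```groovy
// resources.groovy -- sketch; wireFormat.maxInactivityDuration is the time
// (in ms) of transport silence after which ActiveMQ considers the connection
// dead and releases the exclusive consumer (default 30000).
beans = {
    jmsConnectionFactory(org.apache.activemq.ActiveMQConnectionFactory) {
        brokerURL = 'tcp://example.com:1234?wireFormat.maxInactivityDuration=10000'
        userName = 'user'
        password = 'password'
    }
}
```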
Hope that helps
