Creating a NetworkLoadBalancer with existing Elastic IPs - aws-cdk

I'm trying to set a pair of Elastic IPs as the public facing addresses for a NetworkLoadBalancer object and running into issues. The console.log("CFN NLB"); line in the code below never executes because the load balancer definition throws the following error:
There are no 'Public' subnet groups in this VPC. Available types:
Subprocess exited with error 1
I'm doing it this way because there's no high-level way to assign existing Elastic IPs to a load balancer without using the Cfn escape hatch as discussed here.
If I enable the commented code in the NetworkLoadBalancer definition, the stack synths successfully but then I get the following when deploying:
You can specify either subnets or subnet mappings, not both (Service: AmazonElasticLoadBalancing; Status Code: 400; Error Code: ValidationError; Request ID: e4b90830-xxxx-4f13-8777-bcf56946781a; Proxy: null)
Code:
const pubSubnet1ID = 'subnet-xxxxxfa6d669cd496';
const pubSubnet2ID = 'subnet-xxxxxbaf8d2d77afb';
const pubSubnet1 = Subnet.fromSubnetId(this, 'pubSubnet1', pubSubnet1ID);
const pubSubnet2 = Subnet.fromSubnetId(this, 'pubSubnet2', pubSubnet2ID);
console.log("Tagging.");
Tags.of(pubSubnet1).add('aws-cdk:subnet-type', 'Public');
Tags.of(pubSubnet2).add('aws-cdk:subnet-type', 'Public');
console.log("Load Balancer...");
this.loadBalancer = new NetworkLoadBalancer(this, 'dnsLB', {
  vpc: assets.vpc,
  internetFacing: true,
  crossZoneEnabled: true,
  // vpcSubnets: {
  //   subnets: [pubSubnet1, pubSubnet2],
  // },
});
console.log("CFN NLB");
this.cfnNLB = this.loadBalancer.node.defaultChild as CfnLoadBalancer;
console.log("Mappings");
const subnetMapping1: CfnLoadBalancer.SubnetMappingProperty = {
  subnetId: pubSubnet1ID,
  allocationId: assets.elasticIp1.attrAllocationId,
};
const subnetMapping2: CfnLoadBalancer.SubnetMappingProperty = {
  subnetId: pubSubnet2ID,
  allocationId: assets.elasticIp2.attrAllocationId,
};
console.log("Mapping assignment");
this.cfnNLB.subnetMappings = [subnetMapping1, subnetMapping2];
I've found references to CDK wanting a tag of aws-cdk:subnet-type with a value of Public and added that tag to our public subnets (both manually and programmatically), but the error remains unchanged.

I found the solution. Uncommenting the vpcSubnets: part of the loadBalancer definition allowed me to get past the first error message. To get around the "You can specify either subnets or subnet mappings, not both" message, I added
this.cfnNLB.addDeletionOverride('Properties.Subnets');
before setting the subnetMappings attribute.
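Putting the pieces together, a minimal sketch of the working definition (same names and assets object as the code above) looks roughly like this:
// Select the two existing public subnets explicitly so CDK doesn't
// go looking for a 'Public' subnet group in the imported VPC.
this.loadBalancer = new NetworkLoadBalancer(this, 'dnsLB', {
  vpc: assets.vpc,
  internetFacing: true,
  crossZoneEnabled: true,
  vpcSubnets: {
    subnets: [pubSubnet1, pubSubnet2],
  },
});

// Drop down to the L1 resource to attach the Elastic IPs.
this.cfnNLB = this.loadBalancer.node.defaultChild as CfnLoadBalancer;

// Remove the Subnets property that the L2 construct rendered; otherwise
// CloudFormation rejects having both Subnets and SubnetMappings.
this.cfnNLB.addDeletionOverride('Properties.Subnets');

this.cfnNLB.subnetMappings = [
  { subnetId: pubSubnet1ID, allocationId: assets.elasticIp1.attrAllocationId },
  { subnetId: pubSubnet2ID, allocationId: assets.elasticIp2.attrAllocationId },
];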

Related

ECS Fargate / single ALB / multiple docker containers

Does anyone have an example of how I could build an ECS cluster with a single Application Load Balancer forwarding host-header requests to two different Docker containers?
I want to have one ALB for a single ECS cluster running both my Angular site and a .NET web service. Ultimately my goal is to script this in Terraform.
Without knowing all the details, I think you are looking for path-based routing, or even better, host-based routing.
Terraform
You need an aws_lb_listener_rule (Load Balancer Listener Rule) for each host/path.
You need an aws_alb_target_group for each ECS service, and you reference the correct target group inside the aws_lb_listener_rule resource.
General
Listener Rules
Before you start using your Application Load Balancer, you must add one or more listeners. A listener is a process that checks for connection requests, using the protocol and port that you configure. The rules that you define for a listener determine how the load balancer routes requests to the targets in one or more target groups. docs
Use Path-Based Routing with Your Application Load Balancer
https://docs.aws.amazon.com/en_us/elasticloadbalancing/latest/application/tutorial-load-balancer-routing.html
Examples
Host Based Listener Rule
resource "aws_lb_listener_rule" "host_based_routing" {
listener_arn = aws_lb_listener.front_end.arn
priority = 99
action {
type = "forward"
target_group_arn = aws_lb_target_group.static.arn
}
condition {
field = "host-header"
values = ["my-service.*.terraform.io"]
}
}
The condition block defines the host or path pattern (example below) that a request must match to be forwarded.
Path Based Listener Rule
resource "aws_lb_listener_rule" "static" {
listener_arn = aws_lb_listener.front_end.arn
priority = 100
action {
type = "forward"
target_group_arn = aws_lb_target_group.static.arn
}
condition {
field = "path-pattern"
values = ["/static/*"]
}
}
Target group
resource "aws_alb_target_group" "alb_target_group" {
name = "example-target-group"
protocol = "HTTP"
port = var.exposed_port
vpc_id = var.vpc_id
deregistration_delay = 30
health_check {
path = var.service_health_check_path
matcher = "200-399"
}
}
https://www.terraform.io/docs/providers/aws/r/lb_listener_rule.html
https://www.terraform.io/docs/providers/aws/r/lb_target_group.html

Strip prefix to remove the environment passed in URL

My frontend application attaches an environment attribute as a prefix in the URI -- I am having difficulty getting Zuul to strip this prefix.
For example, the frontend application makes a request to the Zuul proxy locally at http://localhost:9080/local/domains/metrs/subdomains/medicare/base-templates/. Zuul needs to strip the environment segment "local" from the URL and forward the request to the web service at http://localhost:9090/icews/admin/domains/metrs/subdomains/medicare/base-templates/
Unfortunately, I get a 404 error because the prefix "local" is not being stripped: when I read the logs on the server hosting the web service, I still see the environment "local" in the forwarded call.
Here is my Zuul configuration:
ice.ws.local.url=http://localhost:9090/icews/admin
ice.ws.dev.url=http://developmentserver/icews/admin
zuul.routes.base-templates.path=/local/**/base-templates/**
zuul.routes.base-templates.url=${ice.ws.local.url}
zuul.routes.base-templates.strip-prefix=true
zuul.routes.base-templates.path=/dev/**/base-templates/**
zuul.routes.base-templates.url=${ice.ws.dev.url}
zuul.routes.base-templates.strip-prefix=true
UPDATE
I went ahead and used the approach outlined in https://github.com/spring-cloud/spring-cloud-netflix/issues/1893 and https://github.com/spring-cloud/spring-cloud-netflix/issues/2408
I did not use strip-prefix in application.properties; instead I followed the filter approach from the first issue.
Specifically for my case where I have a frontend that needs to route on different environments (each being a different microservice), I have another Zuul filter that runs in order 6 (PRE_DECORATION_FILTER_ORDER + 1):
import static org.springframework.cloud.netflix.zuul.filters.support.FilterConstants.PRE_DECORATION_FILTER_ORDER;
import static org.springframework.cloud.netflix.zuul.filters.support.FilterConstants.PRE_TYPE;
import static org.springframework.cloud.netflix.zuul.filters.support.FilterConstants.REQUEST_URI_KEY;

import com.netflix.zuul.ZuulFilter;
import com.netflix.zuul.context.RequestContext;
import org.springframework.beans.factory.annotation.Value;
import org.springframework.stereotype.Component;

@Component
public class PostDecorFilter extends ZuulFilter {

    @Value("${zuul.PostSendResponseCustomFilter.post.disable:false}")
    private boolean isDisableFilter;

    @Override
    public int filterOrder() {
        // run this filter after Zuul's PreDecorationFilter
        return PRE_DECORATION_FILTER_ORDER + 1;
    }

    @Override
    public String filterType() {
        return PRE_TYPE;
    }

    @Override
    public boolean shouldFilter() {
        return !isDisableFilter;
    }

    /**
     * Strips the environment prefix from the request URI before routing.
     *
     * @return always null; the rewritten URI is stored in the RequestContext
     */
    @Override
    public Object run() {
        RequestContext context = RequestContext.getCurrentContext();
        // environment is added by frontend Angular app, e.g. dev, dev-t, qa, qa-t
        String environment = context.getRequest().getRequestURI().split("/")[1];
        if ("dev".equals(environment) || "dev-t".equals(environment) || "qa".equals(environment) || "qa-t".equals(environment)) {
            // e.g. /dev/some/resource/
            String requestUriWithEnvironment = context.getRequest().getRequestURI();
            // e.g. /some/resource/
            String requestUriWithoutEnvironment = requestUriWithEnvironment.substring(environment.length() + 1);
            context.put(REQUEST_URI_KEY, requestUriWithoutEnvironment);
        }
        return null;
    }
}
I also had to use a CustomRouteLocator as outlined here: https://github.com/spring-cloud/spring-cloud-netflix/issues/2408
The reason is that the paths in application.properties are
ice.ws.local.url=http://localhost:9090/icews/admin
ice.ws.dev.url=http://developmentserver/icews/admin
zuul.routes.base-templates.path=/local/**/base-templates/**
zuul.routes.base-templates.url=${ice.ws.local.url}
zuul.routes.base-templates.path=/dev/**/base-templates/**
zuul.routes.base-templates.url=${ice.ws.dev.url}
Essentially, I was experiencing the same behavior as outlined here: "it seems that SimpleRouteLocator iterates over all routes, not in the order they're in config file and just pick the first match" https://github.com/spring-cloud/spring-cloud-netflix/issues/2408

use existing vpc and security group when adding an ec2 instance

There is lots of example code, but the rapidly improving cdk package isn't helping me find working examples of some (I thought) simple things. E.g., even an import I found in an example fails:
import { VpcNetworkRef } from '@aws-cdk/aws-ec2';
error TS2724: Module '"../node_modules/@aws-cdk/aws-ec2/lib"' has no exported member 'VpcNetworkRef'. Did you mean 'IVpcNetwork'?
Why does the example ec2 code not show creation of raw ec2 instances?
What would help is example CDK code that uses a hardcoded VpcId and SecurityGroupId (I'll pass these in as context values) to create a pair of new subnets (i.e., one for each availability zone) into which we place a pair of EC2 instances.
Again, the target VPC and SecurityGroup for the instances already exist. We just (today) create new subnets as we add new sets of EC2 instances.
We have lots of distinct environments (sets of aws infrastructure) that currently share a single account, VPC, and security group. This will change, but my current goal is to see if we can use the cloud dev kit to create new distinct environments in this existing model. We have a CF template today.
I can't tell where to start. The examples for referencing existing VPCs aren't compiling.
import { VpcNetworkRef } from '@aws-cdk/aws-ec2';
const vpc = VpcNetworkRef.import(this, 'unused', {vpcId, availabilityZones: ['unused']});
----- edit -----
Discussions on Gitter helped me answer this, including how to add a bare Instance:
const vpc = ec2.VpcNetwork.import(this, 'YOUR-VPC-NAME', {
  vpcId: 'your-vpc-id',
  availabilityZones: ['list', 'some', 'zones'],
  publicSubnetIds: ['list', 'some', 'subnets'],
  privateSubnetIds: ['list', 'some', 'more'],
});
const sg = ec2.SecurityGroup.import(this, 'YOUR-SG-NAME', {
  securityGroupId: 'your-sg-id'
});
// can add subnets to the existing VPC..
const newSubnet = new ec2.VpcSubnet(this, "a name", {
  availabilityZone: "us-west-2b",
  cidrBlock: "a.b.c.d/e",
  vpcId: vpc.vpcId
});
// add a bare instance
new ec2.CfnInstance(this, "instance name", {
  imageId: "an ami",
  securityGroupIds: [sg.securityGroupId],
  subnetId: newSubnet.subnetId,
  instanceType: "an instance type",
  tags: [{ key: "key", value: "value"}]
});
No further answers needed... for me.
import ec2 = require('@aws-cdk/aws-ec2');

// looking up a VPC by its name
const vpc = ec2.Vpc.fromLookup(this, 'VPC', {
  vpcName: 'VPC-Name'
});

// looking up an SG by its ID
const sg = ec2.SecurityGroup.fromSecurityGroupId(this, 'SG', 'SG-ID')

// creating the EC2 instance
const instance = new ec2.Instance(this, 'Instance', {
  vpc: vpc,
  securityGroup: sg,
  instanceType: new ec2.InstanceType('m4.large'),
  machineImage: new ec2.GenericLinuxImage({
    'us-east-1': 'ami-abcdef' // <- add your ami-region mapping here
  }),
});
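One caveat worth noting with Vpc.fromLookup: it resolves the VPC through a context lookup at synth time, so the stack needs an explicit account and region in its env (the next answer shows that pattern), and vpcName is matched against the VPC's Name tag.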
I was running into the issue of importing an existing vpc / subnet / security group as well. I believe it's changed a bit since the original post. Here is how to do it as of v1.18.0:
import { Construct, Stack, StackProps } from '@aws-cdk/core';
import { SecurityGroup, Subnet, SubnetType, Vpc } from '@aws-cdk/aws-ec2';

const stackProps: StackProps = {
  env: {
    region: 'your region',
    account: 'your account'
  },
};

export class MyStack extends Stack {
  constructor(scope: Construct, id: string) {
    super(scope, id, stackProps);

    const vpc = Vpc.fromVpcAttributes(this, 'vpc', {
      vpcId: 'your vpc id',
      availabilityZones: ['your region'],
      privateSubnetIds: ['your subnet id']
    });

    // Get subnets that already exist on your current vpc.
    const subnets = vpc.selectSubnets({ subnetType: SubnetType.PRIVATE });

    // Create a subnet in the existing vpc
    const newSubnet = new Subnet(this, 'subnet', {
      availabilityZone: 'your zone',
      cidrBlock: 'a.b.c.d/e',
      vpcId: vpc.vpcId
    });

    // Get an existing security group.
    const securityGroup = SecurityGroup.fromSecurityGroupId(this, 'securitygroup', 'your security group id');
  }
}
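Building on that, here is a minimal sketch (not from the original answer) of placing an instance into the newly created subnet with the imported security group. The region, AMI ID, and instance type are placeholders, and it assumes Instance, InstanceType, and GenericLinuxImage are added to the @aws-cdk/aws-ec2 import above, with the code sitting in the same constructor:
// Place an EC2 instance in the new subnet, reusing the imported security group.
const instance = new Instance(this, 'Instance', {
  vpc,
  vpcSubnets: { subnets: [newSubnet] },        // put the instance in the subnet created above
  securityGroup,                               // reuse the imported security group
  instanceType: new InstanceType('t3.micro'),  // placeholder size
  machineImage: new GenericLinuxImage({
    'your region': 'ami-xxxxxxxx',             // placeholder region -> AMI mapping
  }),
});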

NopCommerce - How can I use a WSDL Service Reference in a plugin?

Recently I've been working on a payment plugin for the zarinpal.com gateway.
I have to add a service reference using this URL:
https://sandbox.zarinpal.com/pg/services/WebGate/wsdl
Everything is OK, but in PostProcessPayment I get the following error:
Could not find default endpoint element that references contract
'ServiceReference.PaymentGatewayImplementationServicePortType'
in the ServiceModel client configuration section. This might be
because no configuration file was found for your application,
or because no endpoint element matching this contract could be found
in the client element.
and this is my PostProcessPayment method:
public void PostProcessPayment(PostProcessPaymentRequest postProcessPaymentRequest)
{
    string urlToRedirect = "";
    var zarinpal = new ServiceReference.PaymentGatewayImplementationServicePortTypeClient();
    string outResult = "";
    int code = zarinpal.PaymentRequest("5607e960-d64c-4a8b-b03b-0e645bef37d4"
        , 2500, "Our Test Store Name"
        , "test@gmail.com", "0999999999"
        , "http://" + _webHelper.GetStoreLocation(false)
            + "/Plugins/ZarinPal/PDTHandler", out outResult); // test
    if (code == 100)
    {
        urlToRedirect = string.Concat("https://sandbox.zarinpal.com/pg/StartPay/", outResult);
    }
    _httpContext.Response.Redirect(urlToRedirect);
}
I should mention that if I add the Service Reference to the Nop.Web project too, it works well, but I want to build this as a self-contained plugin, and adding the Service Reference to Nop.Web manually is unpleasant.
Could you please help me?

How to configure exclusive consumer with Grails and JMS / ActiveMQ?

I have a Grails app that subscribes to a given ActiveMQ topic using the JMS plugin. How can I make the TestService class an exclusive consumer? Details of exclusive consumers are here.
The use case is that I am running the consumer on AWS EC2, the ActiveMQ feed has a durability of 5 minutes, and it takes longer than that to replace the instance if it dies. I can't afford to lose messages, and message order must be preserved, so I want to use multiple instances, where the first instance to connect is the one the broker sends every message to and the others sit in reserve. If the first instance dies, the AMQ broker will send the messages to one of the other instances.
Also, what criteria are used by JMS to determine when an exclusive consumer has died or gone away?
// resources.groovy
beans = {
    jmsConnectionFactory(org.apache.activemq.ActiveMQConnectionFactory) {
        brokerURL = 'tcp://example.com:1234'
        userName = 'user'
        password = 'password'
    }
}

class TestService {
    static exposes = ["jms"]
    static destination = "SOME_TOPIC_NAME"
    static isTopic = true

    def onMessage(msg) {
        // handle message
        // explicitly return null to prevent unwanted replyTo attempt
        return null
    }
}
First of all, your example uses topics, and that won't work; you want queues:
class TestService {
    static exposes = ["jms"]
    static destination = "MYQUEUE"
    ...
}
Configuring exclusive consumers in ActiveMQ is straightforward:
queue = new ActiveMQQueue("MYQUEUE?consumer.exclusive=true");
...but it may be tricky with the Grails plugin; you can try these:
class TestService {
    static exposes = ["jms"]
    static destination = "MYQUEUE?consumer.exclusive=true"

    def onMessage(msg) { ... }
}

class TestService {
    static exposes = ["jms"]

    @Queue(
        name = "MYQUEUE?consumer.exclusive=true"
    )
    def handleMessage(msg) { ... }
}
Regarding your question on how the broker determines if a consumer dies, I'm not sure how it's done exactly in ActiveMQ, but in most JMS implementations, TCP failures trigger an exception on the connection; the peer (the broker in this case) handles the exception and fails over to the next available consumer.
Hope that helps
