Provision AlbController by using AWS CDK - aws-cdk

I'm using AWS CDK to launch resources, and I have a VPC and a basic EKS cluster provisioned fine.
Now I want to expose my services with an NLB (as in this article: https://aws.amazon.com/blogs/opensource/network-load-balancer-nginx-ingress-controller-eks/).
I hit an issue with the AWS Load Balancer Controller (https://kubernetes-sigs.github.io/aws-load-balancer-controller/v2.4/) when it deploys; it says "another operation (install/upgrade/rollback) is in progress ..."
My code to provision it looks like this:
const albController = new eks.AlbController(this, 'mobaAlbController', {
  cluster: this.cluster,
  version: eks.AlbControllerVersion.V2_4_1,
});
Version:
CDK: 2.41.0 (build 6ad48a3)
eks.KubernetesVersion.V1_21
eks.AlbControllerVersion.V2_4_1
Is there any workaround?

Related

How do I Configure AWS Distro for OpenTelemetry to send data to Datadog in Pulumi?

I want to send logs, metrics, and trace data from my Java code, which runs inside the steinko/helloworld-backend Docker container, to Datadog. I am using the AWS Distro for OpenTelemetry (ADOT) collector container as a sidecar, and I want to configure the Datadog exporter with a YAML file, config.yaml.
I place these two components in an ECS Fargate Service by using Pulumi code:
export const service = new awsx.ecs.FargateService("backend", {
  taskDefinitionArgs: {
    containers: {
      otelCollector: {
        image: "docker.io/amazon/aws-otel-collector",
      },
      backend: {
        image: "steinko/helloworld-backend",
        dependsOn: [{
          containerName: "otelCollector",
          condition: "START",
        }],
      },
    },
  },
});
How do I supply a config.yaml file to docker.io/amazon/aws-otel-collector in Pulumi code?
ADOT PM here, thanks for raising this topic. In our ECS integration we support configurations for AWS X-Ray (traces) and Amazon Managed Service for Prometheus/Amazon CloudWatch (metrics), that is, your use case is currently not supported out-of-the-box. You would need to explicitly set up the ADOT collector (incl. volume for the config; see the examples as a starting point).
If you'd like to create a feature request for out-of-the-box support, please use our public roadmap. Also, note that ADOT at the moment does not yet support logs.
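For reference, a collector config.yaml with a Datadog exporter of the kind the question describes would typically look something like the sketch below. The OTLP receiver endpoint and the ${DD_API_KEY} placeholder are assumptions, and the datadog exporter must actually be present in the collector build you run:

```yaml
# Sketch: route OTLP traces and metrics to Datadog.
receivers:
  otlp:
    protocols:
      grpc:
        endpoint: 0.0.0.0:4317

exporters:
  datadog:
    api:
      key: ${DD_API_KEY}  # placeholder; inject via a secret or environment variable

service:
  pipelines:
    traces:
      receivers: [otlp]
      exporters: [datadog]
    metrics:
      receivers: [otlp]
      exporters: [datadog]
```

As noted in the answer, on Fargate you would still need to get this file (or its contents) to the collector container, for example via a volume or an environment-based override.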

AWS CDK MSK get bootstrap server list for an existing cluster

I am trying to use AWS CDK to deploy AWS Fargate services, written in Spring Boot, that consume messages from an existing MSK Kafka cluster. I can get an ICluster reference with msk.Cluster.fromClusterArn(...), but how do I get the bootstrap server URL for the application to use?
The msk.Cluster class has a field "bootstrapBrokers", as mentioned here. But how can I get the bootstrap broker list from the cluster reference obtained via msk.Cluster.fromClusterArn(...)?

Run Ambassador in local dev environment without Kubernetes

I am trying to run the Ambassador API gateway in my local dev environment to simulate what I'll end up with in production; the difference is that in prod my solution will be running in Kubernetes. To do so, I'm installing Ambassador into Docker Desktop and adding the required configuration to route requests to my microservices. Unfortunately, it did not work for me, and I'm getting the error below:
upstream connect error or disconnect/reset before headers. reset reason: connection failure
I assume that's due to an issue in the mapping file, which is as follows:
apiVersion: ambassador/v2
kind: Mapping
name: institutions_mapping
prefix: /ins/
service: localhost:44332
So what I'm basically trying to do is rewrite all requests coming to http://{ambassador_url}/ins to a service running locally in IIS Express (through Visual Studio) on port 44332.
What am I missing?
I think you may be better off using another of Ambassador Labs' tools, Telepresence.
https://www.telepresence.io/
With Telepresence you can take the service you have running on localhost and project it into your cluster to see how it performs. This way you don't need to spin up a local cluster, and you can get real-time feedback on how your service operates with the other services in the cluster.
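As an aside, if the goal is to keep the Docker Desktop setup: localhost inside the Ambassador container refers to the container itself, not the host machine, so a mapping along these lines may be what's needed (host.docker.internal is Docker Desktop's alias for the host; whether IIS Express accepts connections from outside Visual Studio is a separate assumption to verify):

```yaml
apiVersion: ambassador/v2
kind: Mapping
name: institutions_mapping
prefix: /ins/
service: host.docker.internal:44332
```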

Is there a way in AWS CDK to associate a CodeBuild project with a VPC, subnet and security group?

I have written a CDK script (TypeScript) to create an AWS CodeBuild project. However, the build needs to access the internet, so an explicit VPC, security group, and subnets must be set. I cannot see a way to do this. I notice that I can associate a VPC, subnet, and security group with the CodeBuild project after creation using the AWS CLI, but this is not ideal. Has anyone found a way to do this directly in CDK?
I'm using CDK version 0.26.0.
I worked out how to do this. Below is the code, where project is the CodeBuild project object.
// associate the VPC, security group and subnets with the CodeBuild project
const projectVpc = project.node.findChild('Resource') as codebuild.CfnProject;
projectVpc.propertyOverrides.vpcConfig = {
  vpcId: "vpc-xxxxxx",
  securityGroupIds: ["sg-xxxxxx"],
  subnets: ["subnet-xxxxx1", "subnet-xxxxx2"],
};

Spring Cloud Data Flow for Kubernetes - Could not configure multiple kafka brokers

I'm trying to migrate my SCDF local server deployments to the k8s-based solution, but I've run into problems with the server-side configuration of the Kafka broker list for the apps.
I followed the instructions here: https://docs.spring.io/spring-cloud-dataflow-server-kubernetes/docs/1.7.2.RELEASE/reference/htmlsingle
and downloaded the sample configuration from : https://github.com/spring-cloud/spring-cloud-dataflow-server-kubernetes at branch v1.7.2.RELEASE
Because we've already deployed a kafka cluster, I'd like to configure the broker- and zk-nodes in the server-config-kafka.yaml file so that we could use the same kafka cluster.
I configured my environmentVariables like this:
deployer:
  kubernetes:
    environmentVariables: >
      SPRING_CLOUD_STREAM_KAFKA_BINDER_BROKERS='172.16.3.192:9092,172.16.3.193:9092,172.16.3.194:9092',
      SPRING_CLOUD_STREAM_KAFKA_BINDER_ZK_NODES='172.16.3.192:2181,172.16.3.193:2181,172.16.3.194:2181'
but got an error when trying to deploy my SCDF stream:
Invalid environment variable declared: 172.16.3.193:9092
How should I configure it to make it work?
Thanks in advance.
Remove the > in your YAML.
That's creating a folded block scalar (one long string), not distinct environment variable entries. See: In YAML, how do I break a string over multiple lines?
Also, if you're using CoreDNS in Kubernetes, you should probably use a service DNS name such as kafka.default.svc.cluster.local for the value, rather than IP addresses, and similarly for ZooKeeper.
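A sketch of the corrected fragment (untested; it keeps the asker's single quotes around the comma-containing values and puts the whole thing on one line so no block scalar is involved):

```yaml
deployer:
  kubernetes:
    environmentVariables: "SPRING_CLOUD_STREAM_KAFKA_BINDER_BROKERS='172.16.3.192:9092,172.16.3.193:9092,172.16.3.194:9092',SPRING_CLOUD_STREAM_KAFKA_BINDER_ZK_NODES='172.16.3.192:2181,172.16.3.193:2181,172.16.3.194:2181'"
```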
