AWS CDK MSK: get bootstrap server list for an existing cluster

I am trying to use AWS CDK to deploy AWS Fargate services, written in Spring Boot, that consume messages from an existing MSK Kafka cluster. I can get an ICluster reference via msk.Cluster.fromClusterArn(...), but how do I get the bootstrap server URL for the application to use?
The msk.Cluster class has a bootstrapBrokers field, as mentioned here. But how can I get the bootstrap broker list from the cluster reference obtained via msk.Cluster.fromClusterArn(...)?
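A sketch of one common workaround, assuming CDK v2 module paths (the construct ID and cluster ARN below are placeholders): since the ICluster returned by fromClusterArn(...) does not expose bootstrapBrokers, you can call the kafka:GetBootstrapBrokers API at deploy time through an AwsCustomResource and read the broker string from its response.

import * as cr from 'aws-cdk-lib/custom-resources';

// Placeholder ARN; use the same ARN you pass to msk.Cluster.fromClusterArn(...).
const clusterArn = 'arn:aws:kafka:us-east-1:123456789012:cluster/my-cluster/abc-123';

// Custom resource that calls the MSK GetBootstrapBrokers API on every deploy.
const brokersLookup = new cr.AwsCustomResource(this, 'BootstrapBrokersLookup', {
  onUpdate: {
    service: 'Kafka',
    action: 'getBootstrapBrokers',
    parameters: { ClusterArn: clusterArn },
    physicalResourceId: cr.PhysicalResourceId.of('BootstrapBrokersLookup'),
  },
  policy: cr.AwsCustomResourcePolicy.fromSdkCalls({ resources: [clusterArn] }),
});

// TLS broker string; use 'BootstrapBrokerString' for the plaintext listeners.
const bootstrapServers = brokersLookup.getResponseField('BootstrapBrokerStringTls');

bootstrapServers is a deploy-time token that can be passed straight into the Fargate container's environment, e.g. as SPRING_KAFKA_BOOTSTRAP_SERVERS for a Spring Boot consumer.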

Related

Spring Cloud Data Flow Stream Deployment to Cloud Foundry

I am new to Spring Cloud Data Flow. I am trying to build a simple stream with an HTTP source and a RabbitMQ sink using SCDF stream apps. The stream should be deployed on OSCF (Cloud Foundry). Once deployed, the stream should be able to receive HTTP POST requests and send the request data to RabbitMQ.
So far, I have downloaded the Data Flow Server using the link below and pushed it to Cloud Foundry. I am using the Shell application from my local machine.
https://dataflow.spring.io/docs/installation/cloudfoundry/cf-cli/
I also have the HTTP source and RabbitMQ sink applications deployed in CF, and the RabbitMQ service is bound to the sink application.
My question: how can I create a stream using applications already deployed in CF? Registering an app requires an HTTP/File/Maven URI, but I am not sure how an app that is already deployed on CF can be registered.
Appreciate your help. Please let me know if more details are needed.
Thanks
If you're using the out-of-the-box apps that we ship, the relevant Maven repo configuration is already set within SCDF, so you can deploy the http app right away; SCDF will resolve it, pull it from the Spring Maven repository, and then deploy the application to CF.
However, if you're building custom apps, you can configure your internal/private Maven repositories in SCDF/Skipper and then register your apps using the coordinates from your internal repo.
If Maven is not a viable option for you on CF, I have seen customers resolve artifacts from S3 buckets and persistent-volume services in CF.
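For illustration, registering the out-of-the-box apps from the SCDF shell might look like the following. The Maven coordinates are the RabbitMQ-binder variants of the app starters; the version shown is an assumption, so check the release train you are on:

dataflow:>app register --name http --type source --uri maven://org.springframework.cloud.stream.app:http-source-rabbit:2.1.0.RELEASE
dataflow:>app register --name rabbit --type sink --uri maven://org.springframework.cloud.stream.app:rabbit-sink-rabbit:2.1.0.RELEASE
dataflow:>stream create --name http-to-rabbit --definition "http | rabbit"
dataflow:>stream deploy --name http-to-rabbit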

Dask Gateway Setup on Azure

I am trying to set up Dask Gateway in AKS. Following the documentation, I was able to start the Dask Gateway server in AKS. We have also hosted a separate Jupyter notebook instance within the same cluster. When I try to access the gateway server from this Jupyter notebook instance, it fails with an error.
The Dask Gateway documentation shows accessing the gateway server using an IP address, but in an actual setup we would be using a URL, right? How can I configure the Dask Gateway Helm chart for this?
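One way to get a stable URL, sketched under assumptions (a Helm release and namespace both named dask-gateway; your Traefik service name and Ingress controller may differ): put a Kubernetes Ingress in front of the chart's Traefik service and point clients at that hostname. From a notebook pod inside the same cluster, you can also skip the Ingress and use the service's internal DNS name directly.

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: dask-gateway
  namespace: dask-gateway
spec:
  rules:
    - host: dask-gateway.example.com        # hostname clients will use
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: traefik-dask-gateway  # check `kubectl get svc` for the real name
                port:
                  number: 80

Clients would then connect with Gateway("http://dask-gateway.example.com") instead of an IP address.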

Spring Cloud Data Flow for Kubernetes - Could not configure multiple Kafka brokers

I'm trying to migrate my SCDF local server deployments to the k8s-based solution, but I've run into problems configuring the Kafka broker list that the server passes to the apps.
I followed the instructions here: https://docs.spring.io/spring-cloud-dataflow-server-kubernetes/docs/1.7.2.RELEASE/reference/htmlsingle
and downloaded the sample configuration from : https://github.com/spring-cloud/spring-cloud-dataflow-server-kubernetes at branch v1.7.2.RELEASE
Because we've already deployed a Kafka cluster, I'd like to configure the broker and ZooKeeper nodes in the server-config-kafka.yaml file so that we can use the same Kafka cluster.
I configured my environmentVariables like this:
deployer:
  kubernetes:
    environmentVariables: >
      SPRING_CLOUD_STREAM_KAFKA_BINDER_BROKERS='172.16.3.192:9092,172.16.3.193:9092,172.16.3.194:9092',
      SPRING_CLOUD_STREAM_KAFKA_BINDER_ZK_NODES='172.16.3.192:2181,172.16.3.193:2181,172.16.3.194:2181'
but got an error when trying to deploy my SCDF stream:
Invalid environment variable declared: 172.16.3.193:9092
How should I configure it to make it work?
Thanks in advance.
Remove the > in your YAML.
That's creating a folded block scalar (one long string), not a map of environment variables; see "In YAML, how do I break a string over multiple lines?" for the details of that syntax.
Also, if you are using CoreDNS in Kubernetes, you should probably use something like kafka.default.svc.cluster.local for the value rather than IP addresses, and similarly for ZooKeeper.
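Applied to the configuration above, that might look like the following sketch (same addresses as in the question; the single quotes around each value, which the asker was already using, keep the deployer from splitting on the commas inside the broker lists):

deployer:
  kubernetes:
    environmentVariables: "SPRING_CLOUD_STREAM_KAFKA_BINDER_BROKERS='172.16.3.192:9092,172.16.3.193:9092,172.16.3.194:9092',SPRING_CLOUD_STREAM_KAFKA_BINDER_ZK_NODES='172.16.3.192:2181,172.16.3.193:2181,172.16.3.194:2181'"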

How to integrate configuration files from a config service in Rails?

I am currently running a Rails application and a Spring Boot configuration service on the same local network. Is it possible to configure Rails to use the config files provided by the Spring Boot service?
More specifically, I am looking to fetch the database connection and user data via the service and let Rails connect to a remote database.
The service provides these files over HTTP as JSON or YAML.
Thank you.
Edit: solved it by using a bash script with wget that pulls and assembles the config files, run from container scripts executed before each deploy.
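For reference, a minimal sketch of that approach, assuming a Spring Cloud Config style server that renders a profile's properties as YAML at /{application}-{profile}.yml (the host, app name, and profile below are placeholders):

#!/usr/bin/env bash
set -euo pipefail

# Placeholder config-server location and app/profile names.
CONFIG_URL="http://config-service:8888"
APP="myrailsapp"
PROFILE="production"

# Fetch the rendered YAML and drop it where Rails expects its DB config,
# before the Rails server starts.
wget -qO config/database.yml "${CONFIG_URL}/${APP}-${PROFILE}.yml"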

What is currently the best way to connect Rails on Google Container Engine (GKE) with Google Cloud SQL

The biggest problem, as I understand it, is managing IPs. The Rails pods' IPs will be dynamic, so I will not be able to whitelist them. How do I give Rails access to the Google Cloud SQL database in a secure way without knowing the IPs of the Rails containers?
You can run the Cloud SQL Proxy in your pod, which will allow you to connect to Cloud SQL via a local UNIX socket: https://github.com/GoogleCloudPlatform/cloudsql-proxy#to-use-from-kubernetes
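As a rough sketch of what that looks like in the pod spec (the instance connection name, image tag, and secret paths are placeholders; the linked README is the authoritative version):

# Sidecar container next to the Rails container; the two volumes
# (credentials secret, emptyDir for sockets) must be declared in the pod spec.
- name: cloudsql-proxy
  image: gcr.io/cloudsql-docker/gce-proxy:1.16
  command: ["/cloud_sql_proxy",
            "-dir=/cloudsql",
            "-instances=my-project:us-central1:my-instance",
            "-credential_file=/secrets/cloudsql/credentials.json"]
  volumeMounts:
    - name: cloudsql-credentials
      mountPath: /secrets/cloudsql
      readOnly: true
    - name: cloudsql-sockets
      mountPath: /cloudsql

Rails then points at the socket directory in database.yml, e.g. host: /cloudsql/my-project:us-central1:my-instance.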
