Transition from Kafka Connect (Docker-based) to AWS MSK Connect

I am using this repo as a reference, and I've successfully got it up and running locally: https://github.com/rhaycock/Kafka-Connect-POC/tree/master/kafka-connect-main
I'm trying to set up a connector in MSK Connect and I am using this sink spec: https://github.com/rhaycock/Kafka-Connect-POC/blob/master/kafka-connect-main/config/sink/gcs-connector.json
There are also a lot of environment variables in the docker-compose.yml file: https://github.com/rhaycock/Kafka-Connect-POC/blob/master/kafka-connect-main/docker-compose.yml
My question at this point is: how do I get all of these variables from the Docker Compose file into an MSK connector? Do they go in the "Connector Configuration" in MSK, the Worker Configuration, or somewhere else? Specifically, I need ones like CONNECT_GROUP_ID, CONNECT_CONFIG_STORAGE_TOPIC, CONNECT_OFFSET_STORAGE_TOPIC, and CONNECT_STATUS_STORAGE_TOPIC, among a few others.

The majority of environment variables in the docker-compose.yml file you linked to are managed by the MSK Connect service.
When using MSK Connect, you have control over properties in the following locations:
Connector configuration
Worker configuration (provided the properties are in the defined allow-list here)
Other properties, such as those that dictate how the Connect framework is configured (e.g. KAFKA_CONNECT_MODE in the docker-compose file), are managed by MSK Connect.
Considering the docker-compose.yml file you linked, the list below maps the env vars to properties that are customizable in MSK Connect. Properties excluded from this list are currently service-managed:
KAFKA_BOOTSTRAP_SERVERS -> kafkaCluster.apacheKafkaCluster.bootstrapServers in the CreateConnector request.
AWS_ROLE_ARN -> serviceExecutionRoleArn in the CreateConnector request.
CONNECT_OFFSET_STORAGE_TOPIC -> offset.storage.topic in a Worker Configuration resource, learn more here.
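For illustration, a Worker Configuration in MSK Connect is a plain properties document. A minimal sketch covering the offset-topic mapping above might look like the following; the topic name and converter choices are placeholders you would pick yourself, not values from the linked repo:

```properties
# Sketch of an MSK Connect worker configuration (properties format)
key.converter=org.apache.kafka.connect.storage.StringConverter
value.converter=org.apache.kafka.connect.json.JsonConverter
value.converter.schemas.enable=false
offset.storage.topic=my-gcs-sink-offsets
```

Only properties on the MSK Connect allow-list are accepted here; everything else from the docker-compose file stays service-managed.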
One final note: the README.md of the repository you linked also states:
Copy any required jar files required for CLASSPATH and update the .env with the correct details
In MSK Connect you can achieve this by creating a Custom Plugin resource with the JARs you want to bundle (e.g. your connector implementation).
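As a hedged sketch of that flow (bucket name, key, and plugin name below are placeholders, not real resources), you would zip the connector JARs, upload the archive to S3, and register it as a custom plugin:

```shell
# Bundle the connector JARs, upload them, and register a custom plugin.
zip -j gcs-connector.zip path/to/connector/libs/*.jar
aws s3 cp gcs-connector.zip s3://my-plugin-bucket/gcs-connector.zip
aws kafkaconnect create-custom-plugin \
    --name gcs-sink-plugin \
    --content-type ZIP \
    --location '{"s3Location":{"bucketArn":"arn:aws:s3:::my-plugin-bucket","fileKey":"gcs-connector.zip"}}'
```

You then reference the returned plugin ARN when creating the connector.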

Related

Setting Grafana domain in Docker container

I'm running Grafana from the docker image on docker hub here (v6.7.4). I would like to add a notification to Microsoft Teams and have the links direct back to the domain I am hosting Grafana on.
I have added the MSTeams webhook to Grafana, and it successfully sends notifications. Now, when I click on "view rule" in the notification, it opens localhost:3000 since that is the default domain for Grafana.
In trying to configure this to point to grafana.my.domain, I have followed this configuration of the Grafana Docker image as well as looked at the configuration file settings, specifically the domain and root_url settings.
Based on the Docker configuration, I have tried passing GF_SERVER_DOMAIN=grafana.my.domain, as well as settings for GF_SERVER_SERVE_FROM_SUB_PATH and GF_SERVER_ROOT_URL, and most combinations of those. I have also attempted to alter a sample.ini file that is shipped with the Docker container to include the block:
[server]
domain = grafana.my.domain
I then mounted the .ini file as /grafana/config.ini:/etc/grafana/grafana.ini (based on this) in my docker-compose file, but Grafana did not pick it up.
Still, when the notification is clicked within Teams, I get directed to localhost:3000. Am I missing something with the configuration here? Based on the documentation, it seems that passing the environment variable should be all that is needed.
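For reference, the environment-variable approach described above would look like this in docker-compose form; the domain is taken from the question, while the protocol in root_url is an assumption:

```yaml
services:
  grafana:
    image: grafana/grafana:6.7.4
    ports:
      - "3000:3000"
    environment:
      # root_url is what Grafana uses when building absolute links,
      # e.g. the "view rule" link in notifications
      - GF_SERVER_DOMAIN=grafana.my.domain
      - GF_SERVER_ROOT_URL=https://grafana.my.domain/
```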

How to integrate configuration files from a config service in rails?

I am currently running a Rails application and a Spring Boot configuration service on the same local network. Is it possible to configure Rails to use the config files provided by the Spring Boot service?
More specifically, I am looking to fetch the database connection and user data via the service and let Rails connect to a remote database.
The service provides these files via HTTP as JSON or YAML.
Thank you.
Edit: solved it by using a bash script with wget that pulls and assembles the config files manually, run from container scripts executed before each deploy.
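A minimal sketch of that wget-and-assemble approach, assuming the service returns a flat JSON object; the URL, field names, and adapter are hypothetical, and the crude sed extractor stands in for a proper JSON tool like jq:

```shell
#!/bin/sh
# Hypothetical pre-deploy script: fetch DB settings from the config
# service and assemble a Rails database.yml from them.

# Crude extractor for flat JSON ({"key":"value",...}); a real script
# would use jq, but this keeps the sketch dependency-free.
json_get() {
  sed -n "s/.*\"$1\"[[:space:]]*:[[:space:]]*\"\{0,1\}\([^\",}]*\)\"\{0,1\}.*/\1/p"
}

# In the container script you would fetch the JSON, e.g.:
#   CONFIG_JSON=$(wget -qO- http://config-service:8888/myapp/production.json)
CONFIG_JSON='{"host":"db.example.com","port":5432,"database":"app","username":"deploy","password":"secret"}'

# Assemble the YAML (redirect to config/database.yml in the real script):
cat <<EOF
production:
  adapter: postgresql
  host: $(printf '%s' "$CONFIG_JSON" | json_get host)
  port: $(printf '%s' "$CONFIG_JSON" | json_get port)
  database: $(printf '%s' "$CONFIG_JSON" | json_get database)
  username: $(printf '%s' "$CONFIG_JSON" | json_get username)
  password: $(printf '%s' "$CONFIG_JSON" | json_get password)
EOF
```

Running this before each deploy keeps credentials out of the repository while letting Rails read a normal database.yml.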

How to give a meaningful hostname (XYZ.com), replacing "localhost:8080" in Jenkins

I recently started working with Jenkins for CI/CD for ASP.net applications.
I have built a Jenkins server and created the required jobs with appropriate plugins.
My Jenkins is configured with localhost:8080 initially.
Later I was asked to give it a meaningful hostname (xyz.com), so that within the organization all IT employees can access Jenkins without logging in to the server.
I have already tried to change the configuration under Manage Jenkins -> Configure System -> Jenkins Location (Jenkins URL), changed it to "localhost:80", and added "127.0.0.1 xyz.com" to the hosts file.
I have even changed the port number in the Jenkins.xml file.
It didn't work for me: I still had to browse to "xyz.com:8080".
My final result should be "xyz.com", accessible to the entire team. (I will take care of DNS.)
Any help would be appreciated.
Thanks in advance.
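As a sketch of the Jenkins.xml change mentioned above (this is the Windows service wrapper; the JVM flags are placeholders, and binding port 80 typically requires admin rights and no other service, such as IIS, already on that port):

```xml
<!-- In Jenkins.xml, change --httpPort so Jenkins answers on port 80,
     letting http://xyz.com resolve without a port suffix -->
<arguments>-Xrs -Xmx256m -jar "%BASE%\jenkins.war" --httpPort=80</arguments>
```

After editing, restart the Jenkins service so the new port takes effect.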

Adding IHS server details in an Ant script

I have an Ant script to deploy an EAR file to my WebSphere application server. This server is in a clustered environment and has a cell with its respective nodes.
I also have an IHS server in front of this WAS instance, which my application uses.
Could you kindly guide me on how the Ant script can be used to deploy the EAR file on the cluster while providing the required IHS server details?
Thanks
You should have a look at the section in $WASHOME/bin/configureWebserverDefinition.jacl that iterates through the existing web modules and maps them to the newly created webserver.
In Jython, from the manual:
AdminApp.edit('myapp', ['-MapModulesToServers',
[['.*', '.*', '-WebSphere:cell=mycell,node=mynode,server=server1']]])
But you'd need to substitute the full name for your webserver.
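As an illustrative sketch, that Jython can be driven from Ant by shelling out to wsadmin; the path property, target name, and script filename below are placeholders:

```xml
<!-- Run a wsadmin Jython script (e.g. one containing the AdminApp.edit
     call plus AdminConfig.save()) from an Ant build -->
<target name="map-to-webserver">
  <exec executable="${was.home}/bin/wsadmin.sh" failonerror="true">
    <arg value="-lang"/>
    <arg value="jython"/>
    <arg value="-f"/>
    <arg value="mapModules.py"/>
  </exec>
</target>
```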

Running an Ant script to prepare a Database in Bluemix

I have an Ant script that I use to populate/prepare a database. All I need is to set the host, port, and credentials for the database. It works fine for MySQL and DB2; the DB just needs to be reachable from where the script is executed.
The DB service in Bluemix gives me a DB with an IP (75.x.x.x) that is only reachable from the internal network of Bluemix, it is not accessible externally.
My understanding is that my ant script needs to be executed from inside the Bluemix network/servers.
How can I do that?
What would be the alternatives?
I'm considering creating a NodeJS script to trigger that Ant build internally, but I'm not sure if it will work properly.
dashDB has always had the ability for local clients (outside of Bluemix) to connect to the cloud database, and SQL Database later added the feature as well. So you should be able to populate a database as long as you have the correct driver client installed on your local machine.
Can you provide more details on how you tested that the IP is not reachable? Is there a firewall in place between your local machine and Bluemix? Note that ping is not a good test, because that traffic is blocked for security reasons. You may try the JDBC port indicated on the connection page in the console.
See link for instructions on how to make a connection:
https://www.ng.bluemix.net/docs/#services/SQLDB/index.html#connecting-to-sqldb
You might be able to use a simple custom buildpack. You can start with a sample like this one:
https://github.com/dmikusa-pivotal/cf-test-buildpack
Fork it and modify the bin/compile script to run your Ant task instead. Then put your Ant script (and probably the Ant executable, as I expect it is not installed in the Bluemix environment) in a directory and run:
cf push <appname> -b <your forked git url>
to push it to Bluemix and run it. If you're just using it once, you can probably get away with hard-coding the address and credentials; otherwise you can bind to the same service instance and get the info from VCAP_SERVICES.
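A minimal sketch of such a forked bin/compile, assuming you bundle Ant and your build files inside the pushed app directory (paths, the JAVA_HOME fallback, and the target name are all placeholders):

```shell
#!/usr/bin/env bash
# bin/compile <build-dir> <cache-dir> -- invoked by the buildpack machinery.
# Assumes apache-ant/ and build.xml were pushed as part of the app.
set -e
BUILD_DIR="$1"
export JAVA_HOME="${JAVA_HOME:-/usr/lib/jvm/default-java}"  # assumption
"$BUILD_DIR/apache-ant/bin/ant" -f "$BUILD_DIR/build.xml" prepare-db
```

Since the database is reachable from inside Bluemix, the Ant target can connect to the 75.x.x.x address directly when run this way.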
