Spring Boot: how to idiomatically configure Schema Registry Serdes in spring-kafka - avro

Are there examples of configuring SpecificAvroSerde (or any Schema Registry-based serdes, such as JSON Schema and Protobuf) in spring-kafka that leverage some of the autoconfiguration (based on YAML or properties files)?
There are a few similar questions on SO, like How to use Spring-Kafka to read AVRO message with Confluent Schema registry?
But I want to be specific about Kafka Streams serdes and their declarative configuration.
Thank you

Looks like Kafka Streams can be configured with the default Serde.
See StreamsConfig.DEFAULT_KEY_SERDE_CLASS_CONFIG and StreamsConfig.DEFAULT_VALUE_SERDE_CLASS_CONFIG.
According to Spring Boot conventions, we can provide arbitrary properties from a YAML or properties file: https://docs.spring.io/spring-boot/docs/2.4.0/reference/html/spring-boot-features.html#boot-features-kafka-extra-props
So your SpecificAvroSerde can probably be configured in application.properties like this:
spring.kafka.streams.properties[default.value.serde]=io.confluent.kafka.streams.serdes.avro.SpecificAvroSerde
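Since the default serdes are instantiated by Kafka Streams itself, the Schema Registry URL has to be passed through the same properties map. A slightly fuller, hedged sketch for application.properties (the registry URL is a placeholder for your environment):

# Default serdes for Kafka Streams, resolved through Confluent Schema Registry
spring.kafka.streams.properties[default.key.serde]=io.confluent.kafka.streams.serdes.avro.SpecificAvroSerde
spring.kafka.streams.properties[default.value.serde]=io.confluent.kafka.streams.serdes.avro.SpecificAvroSerde
# Passed through to the serdes when Kafka Streams configures them
spring.kafka.streams.properties[schema.registry.url]=http://localhost:8081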

Related

Equivalent Docker.DotNet AuthConfig class in KubernetesClient for Dotnet application

I have a Docker Swarm application (.NET) which uses the AuthConfig class to store information (username, password, server address, tokens, etc.) for authenticating with the registries. I am trying to write the same application for Kubernetes using KubernetesClient.
Can someone please let me know if there is an equivalent of the AuthConfig class in the Kubernetes k8s.Models client as well?
The analogous class for creating a connection to the k8s API server endpoint would be the following:
KubernetesClientConfiguration (in case you have a proper KUBECONFIG environment variable set, or at least a k8s config on disk)
More specific classes could be found in the folder:
csharp/src/KubernetesClient/KubeConfigModels/
Usage examples could be found here:
csharp/examples/
I would also recommend reading the following documentation pages:
Access Clusters Using the Kubernetes API
Configure Access to Multiple Clusters

Configure Spring Cloud Task to use the Kafka of the Spring Cloud Data Flow server

I have a Spring Cloud Data Flow (SCDF) server running on a Kubernetes cluster with Kafka as the message broker. Now I am trying to launch a Spring Cloud Task (SCT) that writes to a topic in Kafka. I would like the SCT to use the same Kafka that SCDF is using. This brings up two questions that I hope can be answered:
How to configure the SCT to use the same Kafka as SCDF?
Is it possible to configure the SCT so that the Kafka server URI is passed to the SCT automatically when it launches, similar to the data source properties that get passed to the SCT at launch?
As I could not find any examples of how to achieve this, help is much appreciated.
Edit: My own answer
This is how I get it working for my case. My SCT requires spring.kafka.bootstrap-servers to be supplied. From SCDF's shell, I provide it as an argument --spring.kafka.bootstrap-servers=${KAFKA_SERVICE_HOST}:${KAFKA_SERVICE_PORT}, where KAFKA_SERVICE_HOST and KAFKA_SERVICE_PORT are environment variables created by SCDF's k8s setup script.
This is how to launch the task within SCDF's shell
dataflow:>task launch --name sample-task --arguments "--spring.kafka.bootstrap-servers=${KAFKA_SERVICE_HOST}:${KAFKA_SERVICE_PORT}"
You may want to review the Spring Cloud Task Events section in the reference guide.
The expectation is that you'd choose the binder of your choice and package that library on the Task application's classpath. With that dependency in place, you'd then configure the application with Spring Cloud Stream's Kafka binder properties, such as spring.cloud.stream.kafka.binder.brokers and others relevant to connecting to the existing Kafka cluster.
Upon launching the Task application (from SCDF) with these configurations, you'd be able to publish or receive events in your Task app.
Alternatively, with the Kafka binder on the classpath of the Task application, you can define the Kafka binder properties for all the Tasks launched by SCDF via global configuration; see Common Application Properties in the reference guide for more information. In this model, you don't have to configure each Task application with Kafka properties explicitly; instead, SCDF propagates them automatically when it launches the Tasks. Keep in mind that these properties would be supplied to all Task launches.
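As a hedged sketch of that global approach, something along these lines in the SCDF server's configuration should propagate the broker location to every launched Task (the task-scoped common-properties key is an assumption and may vary by SCDF version; the environment variables come from the k8s setup, as in the question):

# Common property applied to all Task launches; the key prefix may differ across SCDF versions
spring.cloud.dataflow.applicationProperties.task.spring.cloud.stream.kafka.binder.brokers=${KAFKA_SERVICE_HOST}:${KAFKA_SERVICE_PORT}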

Spring Cloud Data Flow for Kubernetes - Could not configure multiple kafka brokers

I'm trying to migrate my SCDF local server deployments to the k8s-based solution, but I've run into some problems with the server-side configuration of the Kafka broker list for the apps.
I followed the instructions here: https://docs.spring.io/spring-cloud-dataflow-server-kubernetes/docs/1.7.2.RELEASE/reference/htmlsingle
and downloaded the sample configuration from : https://github.com/spring-cloud/spring-cloud-dataflow-server-kubernetes at branch v1.7.2.RELEASE
Because we've already deployed a Kafka cluster, I'd like to configure the broker and ZooKeeper nodes in the server-config-kafka.yaml file so that we can use the same Kafka cluster.
I configured my environmentVariables like this:
deployer:
  kubernetes:
    environmentVariables: >
      SPRING_CLOUD_STREAM_KAFKA_BINDER_BROKERS='172.16.3.192:9092,172.16.3.193:9092,172.16.3.194:9092',
      SPRING_CLOUD_STREAM_KAFKA_BINDER_ZK_NODES='172.16.3.192:2181,172.16.3.193:2181,172.16.3.194:2181'
but got an error when trying to deploy my SCDF stream:
Invalid environment variable declared: 172.16.3.193:9092
How should I configure it to make it work?
Thanks in advance.
Remove the > in your YAML.
That's creating a block string, not a map of environment variables. See: In YAML, how do I break a string over multiple lines?
Also, if you're using CoreDNS in Kubernetes, you should probably use something like kafka.default.svc.cluster.local for the value rather than IP addresses, and similarly for ZooKeeper.
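A hedged sketch of the corrected block, with the > removed and single service hostnames instead of IP lists (the kafka/zookeeper service names are assumptions; exact escaping of commas inside values may vary by deployer version):

# environmentVariables takes a single comma-separated string of KEY=VALUE pairs
deployer:
  kubernetes:
    environmentVariables: 'SPRING_CLOUD_STREAM_KAFKA_BINDER_BROKERS=kafka.default.svc.cluster.local:9092,SPRING_CLOUD_STREAM_KAFKA_BINDER_ZK_NODES=zookeeper.default.svc.cluster.local:2181'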

Grafana plugin activation

Is there any way to enable a Grafana plugin from its configuration files?
I am using Grafana v5 (or v4).
It looks like you have to log in and then click the Enable button.
I found a workaround for my problem: running Docker with a volume that maps the default SQLite DB for Grafana at /var/lib/grafana/grafana.db.
This keeps any configuration, dashboards, and datasources that were first set up in the web interface.
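A minimal sketch of that workaround (the host path and image tag are assumptions):

# Keep Grafana's SQLite DB on the host so plugin/datasource/dashboard state survives the container
docker run -d -p 3000:3000 \
  -v /opt/grafana/grafana.db:/var/lib/grafana/grafana.db \
  grafana/grafana:5.4.5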
You can use the Zabbix Plugin for Grafana.
You need to install and enable the plugin, then configure the Zabbix datasource:
URL: http://yourserver/zabbix/api_jsonrpc.php
access: proxy or direct, depending on the reachability of the Zabbix and Grafana servers
username: use a dedicated one, with the required read permissions
And you're ready to create dashboards by referencing groups, applications, hosts and items.
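For the install step, a hedged sketch with grafana-cli (assuming the alexanderzobnin-zabbix-app plugin id; on Grafana v4/v5 the app typically still has to be enabled afterwards):

# Install the Zabbix app plugin, then restart Grafana so it is picked up
grafana-cli plugins install alexanderzobnin-zabbix-app
systemctl restart grafana-server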
Here you can find the getting started guide; it's quite complete.
The regexp & templating features are really powerful, so I advise reading it carefully.

Dataflow 1.2.0 YAML configuration changes

Yesterday I upgraded my development environment to Spring Cloud Dataflow 1.2.0 and all of my sink/source app dependencies.
I have two main issues:
javaOpts: -Xmx128m is no longer being picked up, so locally deployed apps have the default Xmx value.
Here is the format of my previously working Dataflow yaml config.
See full here: https://pastebin.com/p1JmLnLJ
spring:
  cloud:
    dataflow:
      applicationProperties:
        stream:
          spring:
            cloud:
              deployer:
                local:
                  javaOpts: -Xmx128m
Kafka config options like ssl.truststore.location etc. are not being read correctly. Another Stack Overflow post indicated these must be written like this: "[ssl.truststore.location]". Is there some documented working YAML config or a list of breaking changes for 1.2.0? The file-based authentication block was also moved, but I was able to figure that one out.
Yes, it looks like a bug in the Spring Cloud Local Deployer when handling the common application properties passed via args. I created https://github.com/spring-cloud/spring-cloud-deployer-local/issues/48 to track this.
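Regarding the bracketed keys mentioned in the question, a hedged YAML sketch of how dotted Kafka keys can be expressed under the common application properties (the truststore path is illustrative only):

spring:
  cloud:
    dataflow:
      applicationProperties:
        stream:
          spring:
            cloud:
              stream:
                kafka:
                  binder:
                    configuration:
                      # bracket notation keeps the dotted key from being split into nested YAML maps
                      "[ssl.truststore.location]": /etc/kafka/secrets/kafka.truststore.jks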
