Flink consumer job is unable to connect to Kafka from Docker

I have a simple streaming Flink Scala job which connects to a Kafka topic and maps its
org.apache.avro.generic.GenericRecord messages into JSON strings.
When it runs in IntelliJ it ingests the topic fine and prints out the JSON.
When I run it in docker-compose I get the following exception:
com.esotericsoftware.kryo.KryoException: Error constructing instance of class: org.apache.avro.Schema$LockableArrayList
Serialization trace:
types (org.apache.avro.Schema$UnionSchema)
schema (org.apache.avro.Schema$Field)
fieldMap (org.apache.avro.Schema$RecordSchema)
schema (org.apache.avro.generic.GenericData$Record)
at com.twitter.chill.Instantiators$$anon$1.newInstance(KryoBase.scala:136)
at com.esotericsoftware.kryo.Kryo.newInstance(Kryo.java:1061)
at com.esotericsoftware.kryo.serializers.CollectionSerializer.create(CollectionSerializer.java:89)
at com.esotericsoftware.kryo.serializers.CollectionSerializer.read(CollectionSerializer.java:93)
at com.esotericsoftware.kryo.serializers.CollectionSerializer.read(CollectionSerializer.java:22)
at com.esotericsoftware.kryo.Kryo.readObject(Kryo.java:679)
at com.esotericsoftware.kryo.serializers.ObjectField.read(ObjectField.java:106)
at com.esotericsoftware.kryo.serializers.FieldSerializer.read(FieldSerializer.java:528)
at com.esotericsoftware.kryo.Kryo.readObject(Kryo.java:679)
at com.esotericsoftware.kryo.serializers.ObjectField.read(ObjectField.java:106)
at com.esotericsoftware.kryo.serializers.FieldSerializer.read(FieldSerializer.java:528)
at com.esotericsoftware.kryo.Kryo.readClassAndObject(Kryo.java:761)
at com.esotericsoftware.kryo.serializers.MapSerializer.read(MapSerializer.java:143)
at com.esotericsoftware.kryo.serializers.MapSerializer.read(MapSerializer.java:21)
at com.esotericsoftware.kryo.Kryo.readObject(Kryo.java:679)
at com.esotericsoftware.kryo.serializers.ObjectField.read(ObjectField.java:106)
at com.esotericsoftware.kryo.serializers.FieldSerializer.read(FieldSerializer.java:528)
at com.esotericsoftware.kryo.Kryo.readObject(Kryo.java:679)
at com.esotericsoftware.kryo.serializers.ObjectField.read(ObjectField.java:106)
at com.esotericsoftware.kryo.serializers.FieldSerializer.read(FieldSerializer.java:528)
at com.esotericsoftware.kryo.Kryo.readObject(Kryo.java:657)
at org.apache.flink.api.java.typeutils.runtime.kryo.KryoSerializer.copy(KryoSerializer.java:262)
at org.apache.flink.api.java.typeutils.runtime.TupleSerializer.copy(TupleSerializer.java:111)
at org.apache.flink.api.java.typeutils.runtime.TupleSerializer.copy(TupleSerializer.java:37)
at org.apache.flink.streaming.runtime.tasks.OperatorChain$CopyingChainingOutput.pushToOperator(OperatorChain.java:635)
at org.apache.flink.streaming.runtime.tasks.OperatorChain$CopyingChainingOutput.collect(OperatorChain.java:612)
at org.apache.flink.streaming.runtime.tasks.OperatorChain$CopyingChainingOutput.collect(OperatorChain.java:592)
at org.apache.flink.streaming.api.operators.AbstractStreamOperator$CountingOutput.collect(AbstractStreamOperator.java:727)
at org.apache.flink.streaming.api.operators.AbstractStreamOperator$CountingOutput.collect(AbstractStreamOperator.java:705)
at org.apache.flink.streaming.api.operators.StreamSourceContexts$NonTimestampContext.collect(StreamSourceContexts.java:104)
at org.apache.flink.streaming.api.operators.StreamSourceContexts$NonTimestampContext.collectWithTimestamp(StreamSourceContexts.java:111)
at org.apache.flink.streaming.connectors.kafka.internals.AbstractFetcher.emitRecordWithTimestamp(AbstractFetcher.java:398)
at org.apache.flink.streaming.connectors.kafka.internal.KafkaFetcher.emitRecord(KafkaFetcher.java:185)
at org.apache.flink.streaming.connectors.kafka.internal.KafkaFetcher.runFetchLoop(KafkaFetcher.java:150)
at org.apache.flink.streaming.connectors.kafka.FlinkKafkaConsumerBase.run(FlinkKafkaConsumerBase.java:715)
at org.apache.flink.streaming.api.operators.StreamSource.run(StreamSource.java:100)
at org.apache.flink.streaming.api.operators.StreamSource.run(StreamSource.java:63)
at org.apache.flink.streaming.runtime.tasks.SourceStreamTask$LegacySourceFunctionThread.run(SourceStreamTask.java:208)
Caused by: java.lang.IllegalAccessException: Class com.twitter.chill.Instantiators$ can not access a member of class org.apache.avro.Schema$LockableArrayList with modifiers "public"
at sun.reflect.Reflection.ensureMemberAccess(Reflection.java:102)
at java.lang.reflect.AccessibleObject.slowCheckMemberAccess(AccessibleObject.java:296)
at java.lang.reflect.AccessibleObject.checkAccess(AccessibleObject.java:288)
at java.lang.reflect.Constructor.newInstance(Constructor.java:413)
at com.twitter.chill.Instantiators$.$anonfun$normalJava$1(KryoBase.scala:170)
at com.twitter.chill.Instantiators$$anon$1.newInstance(KryoBase.scala:133)
... 37 more
I tried forcing Avro serialization with:
val env: StreamExecutionEnvironment = StreamExecutionEnvironment.getExecutionEnvironment
env.getConfig.disableForceKryo()
env.getConfig.enableForceAvro()
but got the same error.
Based on [this][1] I marked all Flink-related dependencies as "provided", with no improvement.
What can be the difference between running the job in the IDE and in
Docker?
How can I fix the job so it can read the Kafka topic from
Docker?
How should I set up Docker for this?
How can I handle the Kryo/Avro serialization issue?
SS
[1]: http://www.alternatestack.com/development/com-esotericsoftware-kryo-kryoexception-unusual-solution-upgrading-flink/
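A likely difference between the two environments: when the job runs from IntelliJ, "provided" dependencies (including flink-avro) are typically still on the classpath, so Flink can use a proper Avro serializer for GenericRecord; in the Docker image they may be absent, so Flink falls back to Kryo, which cannot reflectively instantiate Avro's package-private Schema$LockableArrayList. One way to sidestep the issue entirely is to produce the JSON string inside the deserialization schema, so no Avro type ever crosses an operator boundary. A minimal sketch, assuming a plain Avro binary payload (no Confluent wire-format header) and a reader schema supplied as a string:

import org.apache.avro.Schema
import org.apache.avro.generic.{GenericDatumReader, GenericRecord}
import org.apache.avro.io.DecoderFactory
import org.apache.flink.api.common.serialization.AbstractDeserializationSchema

// Emits plain Strings, so downstream operators never see GenericRecord
// and Kryo is never asked to (de)serialize Avro classes.
class AvroToJsonDeserializationSchema(schemaJson: String)
    extends AbstractDeserializationSchema[String] {

  // Schema and DatumReader are not serializable; rebuild them lazily
  // on each task manager instead of shipping them with the job graph.
  @transient private lazy val schema: Schema =
    new Schema.Parser().parse(schemaJson)
  @transient private lazy val reader =
    new GenericDatumReader[GenericRecord](schema)

  override def deserialize(message: Array[Byte]): String = {
    val decoder = DecoderFactory.get().binaryDecoder(message, null)
    val record: GenericRecord = reader.read(null, decoder)
    record.toString // GenericData.Record.toString renders the record as JSON
  }
}

Plugged into the consumer as new FlinkKafkaConsumer[String](topic, new AvroToJsonDeserializationSchema(schemaJson), props), the stream type is String end to end. Alternatively, shipping flink-avro with the job (compile scope) or copying it into the image's Flink lib/ directory should let enableForceAvro take effect.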

Related

Spring Cloud Dataflow Kafka Topic creation/configuration

I created a Processor, which I will call my-transform here.
And a Stream:
:my-topic > my-transform | log
And I configured the Processor not to auto-create the topic with this config,
and wrote the code as a function bean.
spring:
  cloud:
    stream:
      kafka:
        binder:
          brokers: localhost:9093
          auto-create-topics: false
      function:
        bindings:
          transform-in-0: input
          transform-out-0: output
The problem is that the topic (my-topic) is still created, even though it should not be.
Instead, I want to create it with another application,
so it can be configured with the right retention policy.
It turned out that in my Kafka installation the broker-side autoCreateTopicsEnable setting was still enabled, so the broker created the topic on first use regardless of the binder configuration.
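For reference, the corresponding broker-side switch as it appears in the broker's server.properties (the camel-case form above is how some tooling renders it):

# Kafka broker config: when left at the default (true), the broker
# creates any topic on first use, regardless of client-side settings.
auto.create.topics.enable=false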

How to upload scan outputs from minIO (ex. Nmap, Nikto, Sslyze, Zap) to OWASP DefectDojo

I have a problem uploading the findings of the minIO secureCodeBox outputs to OWASP DefectDojo.
Screenshot of Error
https://drive.google.com/file/d/1PqVOazjr7r_1oMPf6SQsh8_iPFgnqkjC/view?usp=sharing
I tried following these steps:
https://github.com/DefectDojo/django-DefectDojo/blob/dev/readme-docs/KUBERNETES.md
then
https://docs.securecodebox.io/docs/hooks/defectdojo/
This is the link for the scanners
https://github.com/secureCodeBox/secureCodeBox/tree/main/scanners
The Error:
2022-03-07 07:23:54 INFO DefectDojoPersistenceProvider:35 - Downloading Scan Result
2022-03-07 07:23:56 INFO DefectDojoPersistenceProvider:39 - Uploading Findings to DefectDojo at: http://defectdojo.default.minikube.local:8080/
Exception in thread "main" org.springframework.web.client.ResourceAccessException: I/O error on GET request for "http://defectdojo.default.minikube.local:8080/api/v2/users/": defectdojo.default.minikube.local; nested exception is java.net.UnknownHostException: defectdojo.default.minikube.local
at org.springframework.web.client.RestTemplate.doExecute(RestTemplate.java:785)
at org.springframework.web.client.RestTemplate.execute(RestTemplate.java:751)
at org.springframework.web.client.RestTemplate.exchange(RestTemplate.java:621)
at io.securecodebox.persistence.defectdojo.service.GenericDefectDojoService.internalSearch(GenericDefectDojoService.java:151)
at io.securecodebox.persistence.defectdojo.service.GenericDefectDojoService.search(GenericDefectDojoService.java:167)
at io.securecodebox.persistence.defectdojo.service.GenericDefectDojoService.searchUnique(GenericDefectDojoService.java:187)
at io.securecodebox.persistence.strategies.VersionedEngagementsStrategy.run(VersionedEngagementsStrategy.java:82)
at io.securecodebox.persistence.DefectDojoPersistenceProvider.main(DefectDojoPersistenceProvider.java:42)
Caused by: java.net.UnknownHostException: defectdojo.default.minikube.local
at java.base/java.net.AbstractPlainSocketImpl.connect(AbstractPlainSocketImpl.java:229)
at java.base/java.net.Socket.connect(Socket.java:609)
at java.base/java.net.Socket.connect(Socket.java:558)
at java.base/sun.net.NetworkClient.doConnect(NetworkClient.java:182)
at java.base/sun.net.www.http.HttpClient.openServer(HttpClient.java:474)
at java.base/sun.net.www.http.HttpClient.openServer(HttpClient.java:569)
at java.base/sun.net.www.http.HttpClient.<init>(HttpClient.java:242)
at java.base/sun.net.www.http.HttpClient.New(HttpClient.java:341)
at java.base/sun.net.www.http.HttpClient.New(HttpClient.java:362)
at java.base/sun.net.www.protocol.http.HttpURLConnection.getNewHttpClient(HttpURLConnection.java:1253)
at java.base/sun.net.www.protocol.http.HttpURLConnection.plainConnect0(HttpURLConnection.java:1187)
at java.base/sun.net.www.protocol.http.HttpURLConnection.plainConnect(HttpURLConnection.java:1081)
at java.base/sun.net.www.protocol.http.HttpURLConnection.connect(HttpURLConnection.java:1015)
at org.springframework.http.client.SimpleBufferingClientHttpRequest.executeInternal(SimpleBufferingClientHttpRequest.java:76)
at org.springframework.http.client.AbstractBufferingClientHttpRequest.executeInternal(AbstractBufferingClientHttpRequest.java:48)
at org.springframework.http.client.AbstractClientHttpRequest.execute(AbstractClientHttpRequest.java:66)
at org.springframework.web.client.RestTemplate.doExecute(RestTemplate.java:776)
... 7 more
Thank you for the response!
There is a dedicated DefectDojo hook which will do it for you.
You just need to install it on a cluster with some basic configuration.
Installing the DefectDojo persistenceProvider hook will add a ReadAndWrite hook to your namespace.
kubectl create secret generic defectdojo-credentials --from-literal="username=admin" --from-literal="apikey=08b7..."
helm upgrade --install dd secureCodeBox/persistence-defectdojo \
  --set="defectdojo.url=https://defectdojo-django.default.svc"
Note that defectdojo.url points at the cluster-internal Service (defectdojo-django.default.svc) rather than the minikube.local ingress hostname, which pods typically cannot resolve; an unresolvable hostname is exactly what the UnknownHostException above indicates. The hook will automatically import the scan results into an engagement in DefectDojo. If the engagement doesn't exist, the hook will create it (a CI/CD engagement) and all objects required for it (product & product type). The hook will then pull the imported information from DefectDojo and use it to replace the findings inside secureCodeBox.
More: https://docs.securecodebox.io/docs/hooks/defectdojo
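If you need to check whether pods can resolve the DefectDojo hostname at all, a quick in-cluster lookup (hostname taken from the error above) is one way to narrow it down:

kubectl run dns-test --rm -it --image=busybox --restart=Never -- \
  nslookup defectdojo.default.minikube.local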

Bound off-heap memory consumption in ksqlDB

I'm trying to bound the off-heap memory consumption of the ksqlDB server. I found an article about this: https://www.confluent.io/blog/bounding-ksqldb-memory-usage. As a first approach, I set KSQL_OPTS:
-Dksql.streams.rocksdb.config.setter=io.confluent.ksql.rocksdb.KsqlBoundedMemoryRocksDBConfigSetter
but I got:
[2020-11-16 07:25:15,848] ERROR Failed to start KSQL (io.confluent.ksql.rest.server.KsqlServerMain:60)
org.apache.kafka.common.config.ConfigException: Invalid value io.confluent.ksql.rocksdb.KsqlBoundedMemoryRocksDBConfigSetter for configuration rocksdb.config.setter: Class io.confluent.ksql.rocksdb.KsqlBoundedMemoryRocksDBConfigSetter could not be found.
at org.apache.kafka.common.config.ConfigDef.parseType(ConfigDef.java:728)
at io.confluent.ksql.config.ConfigItem$Resolved.parseValue(ConfigItem.java:125)
at io.confluent.ksql.util.KsqlConfig.lambda$resolveStreamsConfig$2(KsqlConfig.java:693)
at java.base/java.util.Optional.map(Optional.java:265)
at io.confluent.ksql.util.KsqlConfig.resolveStreamsConfig(KsqlConfig.java:693)
at io.confluent.ksql.util.KsqlConfig.lambda$applyStreamsConfig$0(KsqlConfig.java:675)
at java.base/java.util.stream.ReferencePipeline$3$1.accept(ReferencePipeline.java:195)
at java.base/java.util.HashMap$EntrySpliterator.forEachRemaining(HashMap.java:1746)
at java.base/java.util.stream.AbstractPipeline.copyInto(AbstractPipeline.java:484)
at java.base/java.util.stream.AbstractPipeline.wrapAndCopyInto(AbstractPipeline.java:474)
at java.base/java.util.stream.ForEachOps$ForEachOp.evaluateSequential(ForEachOps.java:150)
at java.base/java.util.stream.ForEachOps$ForEachOp$OfRef.evaluateSequential(ForEachOps.java:173)
at java.base/java.util.stream.AbstractPipeline.evaluate(AbstractPipeline.java:234)
at java.base/java.util.stream.ReferencePipeline.forEach(ReferencePipeline.java:497)
at io.confluent.ksql.util.KsqlConfig.applyStreamsConfig(KsqlConfig.java:678)
at io.confluent.ksql.util.KsqlConfig.buildStreamingConfig(KsqlConfig.java:701)
at io.confluent.ksql.util.KsqlConfig.<init>(KsqlConfig.java:738)
at io.confluent.ksql.util.KsqlConfig.<init>(KsqlConfig.java:708)
at io.confluent.ksql.rest.server.KsqlServerMain.main(KsqlServerMain.java:50)
I also tried to set this parameter in ksqldb-server.properties, but the result was the same. I'm using the Docker image confluentinc/ksqldb-server:0.9.0. Is this the correct way to do it, or am I missing something?
The ksqldb-rocksdb-config-setter jar isn't included in the Docker image. You can pull the jar in from Confluent's Maven repository.
See: https://github.com/confluentinc/ksql/issues/6644
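A minimal sketch of one way to do that, by extending the image; the artifact path and the target classpath directory are assumptions to verify against your ksqlDB release:

FROM confluentinc/ksqldb-server:0.9.0
# Assumed Maven coordinates and classpath directory; check both against
# your release (see the GitHub issue above) before relying on this.
ADD https://packages.confluent.io/maven/io/confluent/ksql/ksqldb-rocksdb-config-setter/0.9.0/ksqldb-rocksdb-config-setter-0.9.0.jar \
    /usr/share/java/ksqldb-server/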

%PARSER_ERROR while turning up the server

I have added 2 Gradle dependencies for parsing YAML content and generating JSON out of it, but I am getting these kinds of log lines when I try to bring up my server.
%PARSER_ERROR[cyan] %PARSER_ERROR[gray] %PARSER_ERROR[highlight] %PARSER_ERROR[magenta] - Kafka version: 2.3.0
%PARSER_ERROR[cyan] %PARSER_ERROR[gray] %PARSER_ERROR[highlight] %PARSER_ERROR[magenta] - Kafka commitId: fc1aaa116b661c8a
%PARSER_ERROR[cyan] %PARSER_ERROR[gray] %PARSER_ERROR[highlight] %PARSER_ERROR[magenta] - Kafka startTimeMs: 1604911935830
Excluding the 'ch.qos.logback' group's 'logback-core' module from the Gradle dependency solved the issue.
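For illustration, the exclusion in the Gradle build file looks like this; the YAML library's coordinates here are a hypothetical placeholder for whichever dependency pulls in logback-core:

dependencies {
    // 'some.group:yaml-parser:1.0' is a placeholder, not a real artifact
    implementation('some.group:yaml-parser:1.0') {
        exclude group: 'ch.qos.logback', module: 'logback-core'
    }
}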

get jmx attributes with jolokia telegraf

I have a Java application whose JMX attributes I want to monitor using the telegraf tool.
The tool provides a jolokia plugin to monitor JMX attributes. Per the Maven section of the Jolokia documentation, I added the following dependencies to my app's pom.xml file:
<dependency>
<groupId>org.jolokia</groupId>
<artifactId>jolokia-core</artifactId>
<version>1.3.7</version>
</dependency>
<dependency>
<groupId>org.jolokia</groupId>
<artifactId>jolokia-client-java</artifactId>
<version>1.3.7</version>
</dependency>
This is my /etc/telegraf/telegraf.conf file:
[[inputs.jolokia]]
  context = "/jolokia/"

  [[inputs.jolokia.servers]]
    name = "wr-core"
    host = "192.168.100.175"
    port = "1998"

  [[inputs.jolokia.metrics]]
    name = "send_success"
    mbean = "wr-core:type=monitor,name=execution"
    attribute = "MessageSendSuccessCount"
The application is up on the provided IP/port (I can connect to it with jconsole). The application has a monitoring section whose object name (as shown in jconsole) is wr-core:type=monitor,name=execution, with the attribute MessageSendSuccessCount. But when I start the telegraf service, the following error occurs:
Jan 14 14:30:32 ZiZi telegraf[17258]: 2018-01-14T11:00:32Z E! Error in plugin [inputs.jolokia]: error performing request: Error decoding JSON response: invalid character '\x00' looking for beginning of value:
Note that 1998 is my app's JMX port. I also tried using 8778, the jolokia agent port, and got:
Jan 14 14:40:03 ZiZi telegraf[9150]: 2018-01-14T11:10:03Z E! Error in plugin [inputs.jolokia]: error performing request: Post http://192.168.100.175:8778/jolokia/: dial tcp 192.168.100.175:8778: getsockopt: connection refused
EDIT 1:
I have checked my CLASSPATH and both jolokia-client and jolokia-core were listed: ../lib/jolokia-client-java-1.3.7.jar:../lib/jolokia-core-1.3.7.jar.
EDIT 2:
I put the following lines into my app's execution file:
JOLOKIA_OPTS=-javaagent:$LIB_PATH/jolokia-core-java-1.3.7.jar=port=8778,host=0.0.0.0
JAVA_OPTS="-mx4096M $JAVA_OPTS $JACOCO_OPTS $JOLOKIA_OPTS"
But when I run the file, I get this error (even though ../lib/jolokia-core-java-1.3.7.jar has been listed in the CLASSPATH):
Error opening zip file or JAR manifest missing : ../lib/jolokia-core-java-1.3.7.jar
Error occurred during initialization of VM
agent library failed to init: instrument
Found the solution.
I skipped the Maven solution and tried the javaagent approach; I had misunderstood the usage of javaagent previously, and should have pointed it at the Jolokia JVM agent (this helped):
JOLOKIA_OPTS=-javaagent:/root/jolokia-jvm-1.3.7-agent.jar=port=8778,host=0.0.0.0
JAVA_OPTS="-mx4096M $JAVA_OPTS $JACOCO_OPTS $JOLOKIA_OPTS"
Now my app starts (successfully) with this log:
I> No access restrictor found, access to any MBean is allowed
Jolokia: Agent started with URL http://192.168.100.175:8778/jolokia/
On the telegraf side, there are no more jolokia errors in the console.
All the observations imply that the jolokia JVM agent has started and works successfully.
I also found the jolokia JMX documentation for using it as a dependency in the project; but since I'm not a Java expert (I'm testing the app), I prefer the javaagent approach for now and leave the dependency route for future study. It may help others, though.
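One detail worth making explicit: with the agent listening on 8778, the telegraf server entry from above must point at the agent's HTTP port rather than the JMX port, along these lines:

[[inputs.jolokia.servers]]
  name = "wr-core"
  host = "192.168.100.175"
  port = "8778"  # Jolokia agent HTTP port, not the JMX port (1998)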
EDIT 1:
I found and deployed the jolokia JVM agent using its Spring support.
Configured in the Spring XML file, the jolokia JVM agent now starts listening at my app's startup.
