I have the following configuration for a serverless Lambda function which is supposed to be triggered by Kafka (Amazon MSK).
Using Serverless Framework 2.72.2.
Yet when deploying I get the error "event[0] unsupported function event".
kafkaConsumer:
  role: 'some_arn'
  handler: kafkaConsumer.trigger
  name: some-kafka-consumer
  events:
    - msk:
        arn: 'kafka_cluster_arn'
        topic: 'kafka_topic_name'
Please advise what I'm not configuring properly.
It seems like you might be using a version of the Framework that does not support the msk event definition. It was added in the 2.3.0 release: https://github.com/serverless/serverless/blob/master/CHANGELOG.md#230-2020-09-25
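If your CLI turns out to be older than that, upgrading (npm install -g serverless) and pinning a minimum version in serverless.yml makes the mismatch fail fast next time; a minimal sketch (the exact range is an assumption based on the changelog entry above):

# serverless.yml: refuse to deploy with a CLI that predates msk support
frameworkVersion: '>=2.3.0'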
I created a simple API (using Serverless) which is protected by an API key (when deployed via $ serverless deploy). However, for local development ($ serverless offline) I do not want to use an API key. How can I disable this for local development only?
This is my serverless.yml:
service: my-service
frameworkVersion: "3"

provider:
  name: aws
  runtime: nodejs16.x
  region: eu-central-1
  apiGateway:
    apiKeys:
      - name: my-apikey
        value: ${ssm:my-apikey}

functions:
  myfunc:
    handler: src/v1/myfunc/index.get
    events:
      - http:
          path: /v1/myfunc
          method: get
          private: true

plugins:
  - serverless-esbuild
  - serverless-offline
  - serverless-dotenv-plugin
Note: I am aware that I could simply set private: false when doing local development but this is quite tedious when there is a long list of functions.
The solution was to use the --noAuth option:
serverless offline --noAuth
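If you run this often, one small convenience (the script name dev is an assumption) is wrapping the flag in an npm script in package.json:

{
  "scripts": {
    "dev": "serverless offline --noAuth"
  }
}

Local development then becomes npm run dev, while serverless deploy keeps the API key enforced.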
I created a Processor, which I will call my-transform here, and a Stream:
:my-topic > my-transform | log
I configured the Processor not to auto-create the topic with the config below, and wrote the code as a function bean.
spring:
  cloud:
    stream:
      kafka:
        binder:
          brokers: localhost:9093
          auto-create-topics: false
      function:
        bindings:
          transform-in-0: input
          transform-out-0: output
The problem is that the topic (my-topic) is still created, even though it should not be. Instead, I want to create it from another application, so it can be configured with the right retention policy.
In my Kafka installation, autoCreateTopicsEnable was still enabled.
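Note that spring.cloud.stream.kafka.binder.auto-create-topics only keeps the binder from provisioning topics; if broker-side auto-creation is on, the broker still creates my-topic the first time it is touched. A minimal sketch of the broker-side fix plus creating the topic separately with a retention policy (the partition, replication, and retention values are placeholders):

# server.properties: disable broker-side topic auto-creation
auto.create.topics.enable=false

# create the topic from another application or by hand, with the desired retention
kafka-topics.sh --bootstrap-server localhost:9093 --create --topic my-topic \
  --partitions 3 --replication-factor 1 --config retention.ms=604800000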
I am using an SCDF deployment on k8s and trying to add a new Task Application from our internal Maven repo. By default, SCDF seems to only look in the [springRepo] repository. I followed the documentation for adding a new Maven repo. Since the documentation only shows a Cloud Foundry example, I added these lines to the application.yaml section based on my understanding.
spring:
  cloud:
    dataflow:
      task:
        platform:
          local:
            accounts:
              localDev:
                ********
  datasource:
    uri: xxx
  *********
  maven:
    remote-repositories:
      repo1:
        url: https://repo1
        auth:
          username: user1
          password: pass1
        snapshot-policy:
          update-policy: daily
          checksum-policy: warn
        release-policy:
          update-policy: never
          checksum-policy: fail
While adding the app I used the syntax maven://<groupId>:<artifactId>[:<extension>[:<classifier>]]:<version>. However, when I launch the task, it fails with the error: Failed to resolve maven Resource XXX at configured Remote Repository : [springRepo]
How can I override it to search in my newly added repo? Why is SCDF still only searching the default [springRepo]? Appreciate any help.
The property prefix is maven.remote-repositories but what you have is spring.maven.remote-repositories.
You need to specify:
spring:
  cloud:
    dataflow:
      task:
        platform:
          local:
            accounts:
              localDev:
                ********
  datasource:
    uri: xxx
  *********
maven:
  remote-repositories:
    repo1:
      url: https://repo1
      ...
Please note that the Kubernetes deployment works with containers rather than Maven jar artifacts, and hence you need to register your apps with a URI that uses the docker: prefix.
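For example (a sketch; the image coordinates are placeholders), registering a task app from the SCDF shell on Kubernetes looks like:

app register --name my-task --type task --uri docker:myorg/my-task:1.0.0

while the maven:// URI form above applies to the local platform.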
I have a simple streaming Flink Scala job which connects to a Kafka topic and maps its org.apache.avro.generic.GenericRecord messages into JSON strings.
When it runs in IntelliJ it ingests the topic fine and prints out the JSON.
When I run it in docker-compose I get the following exception:
com.esotericsoftware.kryo.KryoException: Error constructing instance of class: org.apache.avro.Schema$LockableArrayList
Serialization trace:
types (org.apache.avro.Schema$UnionSchema)
schema (org.apache.avro.Schema$Field)
fieldMap (org.apache.avro.Schema$RecordSchema)
schema (org.apache.avro.generic.GenericData$Record)
at com.twitter.chill.Instantiators$$anon$1.newInstance(KryoBase.scala:136)
at com.esotericsoftware.kryo.Kryo.newInstance(Kryo.java:1061)
at com.esotericsoftware.kryo.serializers.CollectionSerializer.create(CollectionSerializer.java:89)
at com.esotericsoftware.kryo.serializers.CollectionSerializer.read(CollectionSerializer.java:93)
at com.esotericsoftware.kryo.serializers.CollectionSerializer.read(CollectionSerializer.java:22)
at com.esotericsoftware.kryo.Kryo.readObject(Kryo.java:679)
at com.esotericsoftware.kryo.serializers.ObjectField.read(ObjectField.java:106)
at com.esotericsoftware.kryo.serializers.FieldSerializer.read(FieldSerializer.java:528)
at com.esotericsoftware.kryo.Kryo.readObject(Kryo.java:679)
at com.esotericsoftware.kryo.serializers.ObjectField.read(ObjectField.java:106)
at com.esotericsoftware.kryo.serializers.FieldSerializer.read(FieldSerializer.java:528)
at com.esotericsoftware.kryo.Kryo.readClassAndObject(Kryo.java:761)
at com.esotericsoftware.kryo.serializers.MapSerializer.read(MapSerializer.java:143)
at com.esotericsoftware.kryo.serializers.MapSerializer.read(MapSerializer.java:21)
at com.esotericsoftware.kryo.Kryo.readObject(Kryo.java:679)
at com.esotericsoftware.kryo.serializers.ObjectField.read(ObjectField.java:106)
at com.esotericsoftware.kryo.serializers.FieldSerializer.read(FieldSerializer.java:528)
at com.esotericsoftware.kryo.Kryo.readObject(Kryo.java:679)
at com.esotericsoftware.kryo.serializers.ObjectField.read(ObjectField.java:106)
at com.esotericsoftware.kryo.serializers.FieldSerializer.read(FieldSerializer.java:528)
at com.esotericsoftware.kryo.Kryo.readObject(Kryo.java:657)
at org.apache.flink.api.java.typeutils.runtime.kryo.KryoSerializer.copy(KryoSerializer.java:262)
at org.apache.flink.api.java.typeutils.runtime.TupleSerializer.copy(TupleSerializer.java:111)
at org.apache.flink.api.java.typeutils.runtime.TupleSerializer.copy(TupleSerializer.java:37)
at org.apache.flink.streaming.runtime.tasks.OperatorChain$CopyingChainingOutput.pushToOperator(OperatorChain.java:635)
at org.apache.flink.streaming.runtime.tasks.OperatorChain$CopyingChainingOutput.collect(OperatorChain.java:612)
at org.apache.flink.streaming.runtime.tasks.OperatorChain$CopyingChainingOutput.collect(OperatorChain.java:592)
at org.apache.flink.streaming.api.operators.AbstractStreamOperator$CountingOutput.collect(AbstractStreamOperator.java:727)
at org.apache.flink.streaming.api.operators.AbstractStreamOperator$CountingOutput.collect(AbstractStreamOperator.java:705)
at org.apache.flink.streaming.api.operators.StreamSourceContexts$NonTimestampContext.collect(StreamSourceContexts.java:104)
at org.apache.flink.streaming.api.operators.StreamSourceContexts$NonTimestampContext.collectWithTimestamp(StreamSourceContexts.java:111)
at org.apache.flink.streaming.connectors.kafka.internals.AbstractFetcher.emitRecordWithTimestamp(AbstractFetcher.java:398)
at org.apache.flink.streaming.connectors.kafka.internal.KafkaFetcher.emitRecord(KafkaFetcher.java:185)
at org.apache.flink.streaming.connectors.kafka.internal.KafkaFetcher.runFetchLoop(KafkaFetcher.java:150)
at org.apache.flink.streaming.connectors.kafka.FlinkKafkaConsumerBase.run(FlinkKafkaConsumerBase.java:715)
at org.apache.flink.streaming.api.operators.StreamSource.run(StreamSource.java:100)
at org.apache.flink.streaming.api.operators.StreamSource.run(StreamSource.java:63)
at org.apache.flink.streaming.runtime.tasks.SourceStreamTask$LegacySourceFunctionThread.run(SourceStreamTask.java:208)
Caused by: java.lang.IllegalAccessException: Class com.twitter.chill.Instantiators$ can not access a member of class org.apache.avro.Schema$LockableArrayList with modifiers "public"
at sun.reflect.Reflection.ensureMemberAccess(Reflection.java:102)
at java.lang.reflect.AccessibleObject.slowCheckMemberAccess(AccessibleObject.java:296)
at java.lang.reflect.AccessibleObject.checkAccess(AccessibleObject.java:288)
at java.lang.reflect.Constructor.newInstance(Constructor.java:413)
at com.twitter.chill.Instantiators$.$anonfun$normalJava$1(KryoBase.scala:170)
at com.twitter.chill.Instantiators$$anon$1.newInstance(KryoBase.scala:133)
... 37 more
I tried forcing Avro serialization with:
val env: StreamExecutionEnvironment = StreamExecutionEnvironment.getExecutionEnvironment
env.getConfig.disableForceKryo()
env.getConfig.enableForceAvro()
but got the same error.
Based on [this][1], I'm declaring all Flink-related dependencies as "provided", without success.
What can be the difference between running the job in the IDE and in Docker?
How can I fix the job so it can read the Kafka topic from Docker?
How should I set up Docker for this?
How can I handle the Kryo/Avro serialization issue?
[1]: http://www.alternatestack.com/development/com-esotericsoftware-kryo-kryoexception-unusual-solution-upgrading-flink/
PROBLEM
I cannot get serverless offline to run when not connected to the internet.
serverless.yml
service: my-app

plugins:
  - serverless-offline

# run on port 4000, because client runs on 3000
custom:
  serverless-offline:
    port: 4000

# app and org for use with dashboard.serverless.com
app: my-app
org: my-org

provider:
  name: aws
  runtime: nodejs10.x

functions:
  getData:
    handler: data-service.getData
    events:
      - http:
          path: data/get
          method: get
          cors: true
          isOffline: true
  saveData:
    handler: data-service.saveData
    events:
      - http:
          path: data/save
          method: put
          cors: true
          isOffline: true
To launch serverless offline, I run serverless offline start in the terminal. This works when I am connected to the internet, but when offline, I get the following errors:
Console Error
:4000/data/get:1 Failed to load resource: net::ERR_CONNECTION_REFUSED
20:34:02.820 localhost/:1 Uncaught (in promise) TypeError: Failed to fetch
Terminal Error
FetchError: request to https://api.serverless.com/core/tenants/{tenant}/applications/my-app/profileValue failed, reason: getaddrinfo ENOTFOUND api.serverless.com api.serverless.com:443
Request
I suspect the cause is that I have not set up offline mode correctly per this instruction: "The event object passed to your λs has one extra key: { isOffline: true }. Also, process.env.IS_OFFLINE is true."
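Reading that instruction, the flag is something the handler checks at runtime; a minimal sketch (the response body here is made up):

// data-service.js
module.exports.getData = async (event) => {
  // serverless-offline sets process.env.IS_OFFLINE to true and
  // adds { isOffline: true } to the event object it passes in
  const isLocal = process.env.IS_OFFLINE === 'true' || event.isOffline === true;
  return {
    statusCode: 200,
    body: JSON.stringify({ source: isLocal ? 'local' : 'deployed' }),
  };
};

If that reading is right, the isOffline: true lines under the http events in the YAML above are not a recognized setting and have no effect.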
Any assistance on how to debug the issue would be much appreciated.
You have probably already fixed it, but the problem is caused by the app and org attributes:
# app and org for use with dashboard.serverless.com
app: my-app
org: my-org
When you use them, Serverless pulls configuration from serverless.com, commonly environment variables. To use environment variables locally instead, you can use the serverless-dotenv-plugin. This way, you don't need an internet connection.
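A minimal sketch of that setup (the variable name MY_VALUE is a placeholder): drop the app/org lines, keep the plugin, and put the values in a local .env file that the plugin loads into process.env.

# serverless.yml (excerpt)
plugins:
  - serverless-offline
  - serverless-dotenv-plugin

# .env (loaded into process.env by serverless-dotenv-plugin)
MY_VALUE=some-local-value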