I have a streaming topic in JSON with 50 fields. I am trying to create another stream with one field from that topic using KSQL, as below:
create stream data (timeGMT string) with (kafka_topic='json_data', value_format='json');
The stream was created successfully; however, no data is returned from the KSQL query below:
select * from data;
This is running on KSQL 5.0.0
There are a few things to check, including:
Is there any data in the topic?
PRINT 'json_data' FROM BEGINNING;
Have you told KSQL to read from the beginning of the topic?
SET 'auto.offset.reset' = 'earliest';
Are there messages in your topic that aren't JSON or can't be parsed? Check the KSQL Server log for errors.
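If you want a quick way to spot parse problems from KSQL itself, DESCRIBE EXTENDED on the stream should report local runtime statistics, including failed-message counts, once a query has run against it (a hedged suggestion; the exact statistic names vary by version):
DESCRIBE EXTENDED DATA;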
You can see more information on these, and other troubleshooting tips, in this blog.
Hello there, I have successfully set up an inbound webhook with StrongGrid in .NET Core 3.1.
The endpoint gets called, and I want to parse the values inside the attachment, which is a CSV file.
The code I am using is the following:
var parser = new WebhookParser();
var inboundEmail = await parser.ParseInboundEmailWebhookAsync(Request.Body).ConfigureAwait(false);
await _emailSender.SendEmailAsyncWithSendGrid("info@mydomain.com", "ParseWebhook1", inboundEmail.Attachments.First().Data.ToString());
Please note that I am sending an email because I don't know how to debug the webhook with SendGrid, as I am not aware of any CLI for it.
But this line apparently is not what I am looking for:
inboundEmail.Attachments.First().Data.ToString()
This is what I am getting in my email:
Id = a3e6a543-2aee-4ffe-a36a-a53k95921998, Tag = HttpMultipartParser.MultipartFormDataParser.ParseStreamAsync, Length = 530 bytes
The CSV I need to parse has three fields: Sku, ProductName, and Quantity. I'd like to get the Sku values.
Any help would be appreciated.
The .Data property contains a Stream and invoking ToString on a stream object does not return its content. The proper way to read the content of a stream in C# is something like this:
var streamReader = new StreamReader(inboundEmail.Attachments.First().Data);
var attachmentContent = await streamReader.ReadToEndAsync().ConfigureAwait(false);
As far as parsing the CSV, there are literally thousands of projects on GitHub and hundreds on NuGet with the keyword 'CSV'. I'm sure one of them will fit your needs.
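For example, if the attachment really is a plain three-column file with a Sku,ProductName,Quantity header row and no quoted fields, a minimal sketch building on the attachmentContent variable above (and using System.Linq) could be:

var lines = attachmentContent.Split(new[] { "\r\n", "\n" }, StringSplitOptions.RemoveEmptyEntries);

var skus = lines
    .Skip(1)                                   // skip the header row
    .Select(line => line.Split(',')[0].Trim()) // Sku is the first column
    .ToList();

Anything with quoted or escaped fields is better handled by a dedicated CSV library.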
I am trying to collect energy generation statistics such as watts and watt-hours from an external API. I have an external REST API endpoint available for it.
Is there a way in ThingsBoard, using a rule chain, to call the external endpoint and store its response as telemetry data? Later I want to show this data in dashboards.
I know it has been a long time, but ThingsBoard's documentation is lacking, and this might be useful for someone else.
You'd have to use the REST API CALL external node (https://thingsboard.io/docs/user-guide/rule-engine-2-0/external-nodes/#rest-api-call-node)
If the node was successful, it will output its outbound message containing the HTTP response, with the metadata containing:
- metadata.status
- metadata.statusCode
- metadata.statusReason
and with the payload of the message containing the response body from your external REST service (i.e. the telemetry you want to store).
You then have to use a script transformation node to format the metadata, payload, and msgType into the POST_TELEMETRY_REQUEST message format; see: https://thingsboard.io/docs/user-guide/rule-engine-2-0/overview/#predefined-message-types
Your external REST API should provide the correct "deviceName" and "deviceType", as well as the "ts" in UNIX milliseconds.
Notice that you also need to change the messageType (msgType return variable) to "POST_TELEMETRY_REQUEST".
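As a rough sketch, the body of that script transformation node could look something like the following; the field names read from msg (ts, watts, wattHours, deviceName, deviceType) are assumptions about what your external REST API returns, so adjust them to your actual response:

// msg is the (parsed) response body from the REST API CALL node
var newMsg = {
    ts: msg.ts,                         // timestamp in UNIX milliseconds
    values: {
        watts: msg.watts,
        wattHours: msg.wattHours
    }
};

metadata.deviceName = msg.deviceName;   // must identify an existing device
metadata.deviceType = msg.deviceType;

return {msg: newMsg, metadata: metadata, msgType: "POST_TELEMETRY_REQUEST"};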
Finally, just transmit the result to the Save Timeseries action node and it will be stored as telemetry for the specified device. Hope this helps.
I started a Kafka connector using the following command:
./bin/connect-standalone etc/schema-registry/connect-avro-standalone.properties etc/kafka-connect-postgres/connect-postgres.properties
The serialization properties in connect-avro-standalone.properties are:
key.converter=io.confluent.connect.avro.AvroConverter
key.converter.schema.registry.url=http://localhost:8081
value.converter=io.confluent.connect.avro.AvroConverter
value.converter.schema.registry.url=http://localhost:8081
internal.key.converter=org.apache.kafka.connect.json.JsonConverter
internal.value.converter=org.apache.kafka.connect.json.JsonConverter
internal.key.converter.schemas.enable=false
internal.value.converter.schemas.enable=false
I've created a Java backend which listens to this Kafka topic, and it's able to get the data from Postgres with each add/update/delete.
But the data is coming in some unknown encoding format, and that's why I can't read it correctly.
Here is the relevant code snippet:
properties.put(StreamsConfig.DEFAULT_KEY_SERDE_CLASS_CONFIG,
        Serdes.String().getClass().getName());
properties.put(StreamsConfig.DEFAULT_VALUE_SERDE_CLASS_CONFIG,
        Serdes.ByteArray().getClass().getName());

StreamsBuilder streamsBuilder = new StreamsBuilder();
final Serde<String> stringSerde = Serdes.String();
final Serde<byte[]> byteSerde = Serdes.ByteArray();

streamsBuilder.stream(Pattern.compile(getTopic()), Consumed.with(stringSerde, byteSerde))
        .mapValues(data -> {
            System.out.println("->" + new String(data));
            return data;
        });
I'm confused about where and what I need to change: in the Avro connector properties or in the Java-side code?
Your Kafka Connect config here means that the messages on the Kafka topic will be Avro serialised:
value.converter=io.confluent.connect.avro.AvroConverter
This means that you need to deserialise using Avro in your Streams app. See here for more details: https://docs.confluent.io/current/streams/developer-guide/datatypes.html#avro
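As a rough sketch of what that can look like, assuming the Confluent kafka-streams-avro-serde dependency and the same Schema Registry URL as above (and note that key.converter is also the AvroConverter, so the key may need the same treatment rather than a String serde):

import io.confluent.kafka.serializers.AbstractKafkaAvroSerDeConfig;
import io.confluent.kafka.streams.serdes.avro.GenericAvroSerde;
import org.apache.kafka.common.serialization.Serdes;
import org.apache.kafka.streams.StreamsBuilder;
import org.apache.kafka.streams.kstream.Consumed;

import java.util.Collections;
import java.util.regex.Pattern;

// configure a GenericAvroSerde against the Schema Registry the connector writes to
final GenericAvroSerde valueSerde = new GenericAvroSerde();
valueSerde.configure(
        Collections.singletonMap(AbstractKafkaAvroSerDeConfig.SCHEMA_REGISTRY_URL_CONFIG,
                "http://localhost:8081"),
        false); // false = this serde is used for record values, not keys

StreamsBuilder streamsBuilder = new StreamsBuilder();
streamsBuilder.stream(Pattern.compile(getTopic()),
        Consumed.with(Serdes.String(), valueSerde))
        .mapValues(record -> {
            // record is a deserialised Avro GenericRecord; fields are accessible by name
            System.out.println("-> " + record);
            return record;
        });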
I use Grafana (3.0.3) to fetch CloudWatch data, and I want to store the fetched data in InfluxDB (0.13). Any idea how I can do so? Thank you in advance.
Telegraf has a CloudWatch plugin that you can use to store CloudWatch data in InfluxDB.
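A minimal telegraf.conf sketch for that route might look like the following; the region, namespace, and database name are placeholders to adjust:

# pull metrics from CloudWatch
[[inputs.cloudwatch]]
  region = "us-east-1"      # AWS region to query
  namespace = "AWS/EC2"     # CloudWatch namespace to collect
  period = "5m"             # granularity of the requested metrics
  delay = "5m"              # give CloudWatch time to make data available
  interval = "5m"           # how often Telegraf polls

# write them to InfluxDB
[[outputs.influxdb]]
  urls = ["http://localhost:8086"]
  database = "cloudwatch"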
Alternatively, if you're fetching the data yourself, you could write it to InfluxDB yourself. To do so, you'd issue a POST request to the /write endpoint of your InfluxDB instance with your data in line protocol.
Some examples of data in line protocol:
cpu,host=server01,region=uswest value=1 1434055562000000000
cpu,host=server02,region=uswest value=3 1434055562000010000
temperature,machine=unit42,type=assembly internal=32,external=100 1434055562000000035
temperature,machine=unit143,type=assembly internal=22,external=130 1434055562005000035
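For example, a write of one of those points against a local InfluxDB instance could look like this (mydb is a placeholder database name):

curl -i -XPOST 'http://localhost:8086/write?db=mydb' \
  --data-binary 'cpu,host=server01,region=uswest value=1 1434055562000000000'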
The text here: https://developers.google.com/fusiontables/docs/v2/migration_guide implies that the 10MB limit is not in effect for API v2, or that an alternative service, "media download", could be used for large responses.
The API reference here: https://developers.google.com/fusiontables/docs/v2/reference/ does not have any information regarding the 10MB limit, or how you use "media download" to receive your request.
How do I work around the 10MB limit for Fusion Tables API v2? I can't seem to find documentation that explains it.
To use media download, simply add the parameter alt=media to the URL.
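For example, a raw REST call would look roughly like this (the table ID is a placeholder, and the SQL must be URL-encoded):

GET https://www.googleapis.com/fusiontables/v2/query?sql=SELECT%20*%20FROM%20<table id>&alt=media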
For those who use Google's API Client Libraries, the 'media download' is specified by using a specific method. For the Python library, there are two versions of the SQL query methods: sql*() and sql*_media() (and this is very likely true for the other client libraries as well).
Example usage:
from googleapiclient.discovery import build

# Build the googleapiclient service
FusionTables = build('fusiontables', 'v2', credentials=credentials)
query = 'select * from <table id>'

# "standard" query, returning fusiontables#sqlresponse JSON:
jsonRequest = FusionTables.query().sqlGet(sql=query)
jsonResponse = jsonRequest.execute()

# alt=media query, returning a CSV-formatted bytestring (in Python, at least):
bytestrRequest = FusionTables.query().sqlGet_media(sql=query)
byteResponse = bytestrRequest.execute()
As Kerry mentions here, media-format queries that are too large to be sent as a GET request will fail (while regular-format queries of the same length succeed, provided the query result is less than 10 MB). In the Python client, this failure appears as an HTTP 502: Bad Gateway error.
Also note that ROWIDs are currently not returned in the media response format.