ksql - enum and struct support for protobuf - ksqldb

I have the following message structure:
syntax = "proto3";

package com.test;

option java_multiple_files = true;

message Event1 {
  string event1 = 1;
}

message Event2 {
  string event2 = 1;
}

message Events {
  oneof oneof_type {
    Event1 event1 = 1;
    Event2 event2 = 2;
  }
}
I created the following stream:
CREATE STREAM TEST_STREAM (
  oneof_type STRING,
  event1 STRUCT<event1 STRING>,
  event2 STRUCT<event2 STRING>
) WITH (
  KAFKA_TOPIC = 'nano-events2',
  VALUE_FORMAT = 'PROTOBUF'
);
And I got null for all columns.
The stream log shows the following:
How can I map this message into a stream?

This question was cross-posted on the ksqlDB GitHub page: https://github.com/confluentinc/ksql/issues/7830
It seems there is a bug: https://github.com/confluentinc/ksql/issues/5265

Related

Ads script: CONCURRENT_MODIFICATION errors when creating keywords, without any modification

I have a script which copies ads, keywords, negative keywords, sitelinks, callouts and snippets from a master account to a slave account.
Everything works properly, but when I create keywords in the slave account I receive errors:
[CONCURRENT_MODIFICATION : DatabaseError.CONCURRENT_MODIFICATION : ]
The weirdest part is that out of 10 executions of the script, 5 run without any error and 5 fail.
function copyKeywords(slaveGroup, masterKeywordsData, replicationSettings) {
    var keys = Object.keys(masterKeywordsData);
    var count = keys.length;
    log('Copying %s keywords...', count);
    for (var i = 0; i < count; i++) {
        var masterKeywordData = masterKeywordsData[keys[i]];
        log('%s/%s - Replicating keyword %s...', (i + 1), count, masterKeywordData.id);
        log('Keyword data: %s', JSON.stringify(masterKeywordData));
        var finalUrl = null;
        if (masterKeywordData.finalUrl) {
            finalUrl = formateUrl(masterKeywordData.finalUrl, replicationSettings);
        }
        var slaveKeyword = slaveGroup
            .newKeywordBuilder()
            .withText(masterKeywordData.text);
        if (finalUrl) {
            slaveKeyword = slaveKeyword.withFinalUrl(finalUrl);
        }
        slaveKeyword = slaveKeyword.build();
        if (slaveKeyword == null) {
            log('Nothing was replicated');
        } else if (slaveKeyword.isSuccessful()) {
            log('Keyword %s successfully replicated', masterKeywordData.id);
        } else {
            log(slaveKeyword.getErrors());
            error('Cannot replicate keyword %s ', masterKeywordData.id);
        }
        sleep(2000);
    }
}
With the 2000 millisecond sleep I receive errors less often, but it still happens sometimes.
Does anybody know why I receive an error about
[CONCURRENT_MODIFICATION : DatabaseError.CONCURRENT_MODIFICATION : ]
when I do not make any modification?
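CONCURRENT_MODIFICATION is generally a transient error raised when two processes touch the same entity at the same time, so one common workaround is to wrap the create call in a retry with backoff rather than a fixed sleep. A minimal sketch, assuming the error text contains "CONCURRENT_MODIFICATION"; the sleep function is injected (in an Ads script you could pass Utilities.sleep), and `operation` stands in for your keyword-building code:

```javascript
// Retry-with-backoff sketch for transient CONCURRENT_MODIFICATION errors.
// `operation` is any function that may throw; `sleepFn` is injected so the
// helper stays testable outside the Ads script environment.
function retryOnConcurrentModification(operation, maxAttempts, sleepFn) {
    var waitMs = 1000;
    for (var attempt = 1; attempt <= maxAttempts; attempt++) {
        try {
            return operation();
        } catch (e) {
            // Only retry the transient error; rethrow anything else,
            // and give up after the last attempt.
            if (String(e).indexOf('CONCURRENT_MODIFICATION') === -1 ||
                    attempt === maxAttempts) {
                throw e;
            }
            sleepFn(waitMs);
            waitMs *= 2; // exponential backoff: 1s, 2s, 4s, ...
        }
    }
}
```

Inside copyKeywords you would then wrap the builder call, e.g. `retryOnConcurrentModification(function () { return slaveKeyword.build(); }, 5, Utilities.sleep)`, instead of sleeping unconditionally after every keyword.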

How to mark a notification as read after I receive a CloudKit response?

From CloudKit I receive a response like this:
[AnyHashable("aps"): {
    "content-available" = 1;
}, AnyHashable("ck"): {
    ce = 2;
    cid = "iCloud.pl.blueworld.fieldservice";
    nid = "bb501155-f914-4f8b-b58e-4c24921727f5";
    qry = {
        dbs = 2;
        fo = 3;
        rid = "27C8C222-11B3-4831-A125-EAFEF8DB6ADD";
        sid = "E32D6B20-3F81-464E-ACA8-2DD29FA93CF3";
        zid = "_defaultZone";
        zoid = "_defaultOwner";
    };
}]
At any time I can fetch missed notifications and mark them as read to prevent receiving them again.
I simply do it like this:
OperationQueue().addOperation(CKMarkNotificationsReadOperation(notificationIDsToMarkRead: [queryNotification.notificationID!]))
The nid field links to the notification connected with this response.
After I receive a CloudKit response I need to mark this notification as read. All I have is a String, but the operation to mark it as read expects a CKNotificationID object. How can I create such an object?
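Rather than building a CKNotificationID by hand from the nid string, you can let CloudKit parse the whole push payload: CKNotification(fromRemoteNotificationDictionary:) reconstructs the notification (including its notificationID) from the userInfo dictionary. A minimal sketch, assuming `userInfo` is the dictionary delivered to your remote-notification callback:

```swift
import CloudKit

// userInfo is the payload passed to
// application(_:didReceiveRemoteNotification:fetchCompletionHandler:)
func markAsRead(userInfo: [AnyHashable: Any]) {
    // CloudKit parses the "ck" payload (including nid) for us.
    guard let notification = CKNotification(fromRemoteNotificationDictionary: userInfo),
          let notificationID = notification.notificationID else { return }

    let markRead = CKMarkNotificationsReadOperation(notificationIDsToMarkRead: [notificationID])
    markRead.markNotificationsReadCompletionBlock = { _, error in
        if let error = error {
            print("Failed to mark notification as read: \(error)")
        }
    }
    CKContainer.default().add(markRead)
}
```

This avoids the String-to-CKNotificationID conversion entirely, because the ID object comes straight from the parsed payload.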

Clob field not populated in object in grailsChange changeset using grails DSL

I am making my first migration script that involves the use of the Groovy/Grails DSL. It looks like this:
import my.package.MyObject
import my.package.MyObjectUtil
import javax.xml.bind.DatatypeConverter
databaseChangeLog = {
    changeSet(author: 'wheresjim', id: '1387238199-1') {
        comment {'Sets the timestamp in each MyObject where null using the message text'}
        grailsChange {
            change {
                MyObjectUtil myObjectUtil = new MyObjectUtil()
                def criteria = MyObject.where {
                    isNull("timestamp")
                }
                def PAGESIZE = 10
                int numRows = criteria.count()
                int pages = Math.ceil(numRows / PAGESIZE)
                (pages..0).each { page ->
                    int offset = PAGESIZE * page + PAGESIZE
                    def data = criteria.list(offset: offset, max: PAGESIZE, sort: 'id', order: 'asc')
                    data.each { MyObject myObject ->
                        Date timestamp = new Date(0L)
                        try {
                            def thisMessage = myObjectUtil.createMyObjectFromMessage(myObject.messageText)
                            String dateStr = thisMessage.messageIdentification?.timestamp
                            timestamp = dateStr ? DatatypeConverter.parseDateTime(dateStr).getTime() : timestamp
                        } catch (Exception e) {
                            // Do nothing, this will be logged in the finally which catches another error condition
                        } finally {
                            if (timestamp == new Date(0L)) {
                                log.warn "Error attempting to set timestamp in MyObject ${myObject.id}, setting to $timestamp"
                            }
                        }
                        myObject.timestamp = timestamp
                        myObject.save(flush: true)
                    }
                    log.warn "Updated ${data.size()} rows"
                }
            }
        }
    }
}
The MyObject.messageText is a clob in the database and, to my knowledge, there has been no effort to lazily load it.
I should note that this exact script works (it can find the clob text) using the grails console plugin in the app.
In MyObject, make sure you have the following lines:
static mapping = {
    messageText type: 'text'
}

Protocol message tag had invalid wire type

My application sends data through protobuf from a server to a client.
When I deserialize the sent payload on the client side, Eclipse throws an exception of the following type:
Exception in thread "main" com.google.protobuf.InvalidProtocolBufferException: Protocol message tag had invalid wire type.
The exception happens when I call parseFrom(). I know that in most cases the error lies in a protobuf file with wrong syntax. Therefore I hope that it is enough to post the protobuf definition here:
package protobuf;

option java_package = "com.carproject.abs.demo.protobuf";
option java_outer_classname = "DesktopDevice_getCarsResponse";

message CARS {
  required int64 carid = 1;
  required string carname = 2;

  message Carinformation {
    required string street = 1;
    required string postalcode = 2;
    required string city = 3;
    required string country = 4;
    required string cartimezoneid = 5;
  }
  message Right {
    optional string name = 1;
    optional int32 type = 2;
    optional int32 service = 3;
  }
  message PropertyType {
    optional string name = 1;
    optional string value = 2;
  }

  repeated Carinformation carinformation = 3;
  repeated Right carrights = 4;
  repeated PropertyType carproperties = 5;
  repeated string inoid = 6;
}
Here is the code which shows how the data is written on the server side:
// carObj returns the necessary Strings
CAR carObj = car.getCAR();
Builder newBuilder = DesktopDevice_getCarsResponse.CARS.newBuilder();
newBuilder.setCarid(carObj.getCARID());
newBuilder.setCarname(carObj.getCARNAME());
// hardcoded values here
newBuilder.getCarinformationBuilder(1).setStreet(carObj.getCARINFORMATION().getSTREET());
newBuilder.getCarinformationBuilder(1).setPostalcode(carObj.getCARINFORMATION().getPOSTALCODE());
newBuilder.getCarinformationBuilder(1).setCity(carObj.getCARINFORMATION().getCITY());
newBuilder.getCarinformationBuilder(1).setCountry(carObj.getCARINFORMATION().getCOUNTRY());
newBuilder.getCarinformationBuilder(1).setCartimezoneid(carObj.getCARINFORMATION().getCARTIMEZONEID());
byte[] responsePayload = newBuilder.build().toByteArray();
RestServerResponseMessage responseMsg = new RestServerResponseMessage(requestMsg.getRequestId(), responsePayload, "XML");
return responseMsg;
As you can see, the server uses the builder pattern provided by protobuf to set the necessary strings. Then the data is serialized as a byte[] and sent back to the client.
Here is the client code where I try to parse the data:
HttpEntity entity = response.getEntity();
if (entity != null) {
    InputStream instream = entity.getContent();
    CARS car = DesktopDevice_getCarsResponse.CARS.parseFrom(instream);
}
The exception is thrown when parseFrom is called. The server code and carObj are working fine; I am already successfully sending protobuf data elsewhere in my program.
Use Base64.getEncoder().encode(protoMsg.toByteArray()) on the server, and Base64.getDecoder().decode(receivedBytes) on the client. Binary data sent over a text-based channel should always be encoded before sending and decoded before parsing.
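The suggestion above can be sketched as follows. The example payload bytes stand in for protoMsg.toByteArray(), since the generated protobuf classes are not shown here:

```java
import java.util.Arrays;
import java.util.Base64;

public class Base64Payload {

    // Encode a binary payload (e.g. protoMsg.toByteArray()) into a
    // text-safe form before sending it over an HTTP/REST channel.
    static String encode(byte[] payload) {
        return Base64.getEncoder().encodeToString(payload);
    }

    // Decode on the client before handing the bytes to parseFrom().
    static byte[] decode(String wireText) {
        return Base64.getDecoder().decode(wireText);
    }

    public static void main(String[] args) {
        byte[] payload = {0x08, (byte) 0x96, 0x01}; // example wire bytes
        String wireText = encode(payload);
        byte[] roundTripped = decode(wireText);
        if (!Arrays.equals(payload, roundTripped)) {
            throw new AssertionError("round trip failed");
        }
        System.out.println(wireText); // prints "CJYB"
    }
}
```

On the client side you would then call CARS.parseFrom(decodedBytes) on the decoded array rather than parsing the raw HTTP stream directly.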

TCP client stream

I'm communicating with an email gateway. That gateway has a specific IP and port.
The requests to the gateway are JSON formatted, and the gateway normally responds first with a proceeding state and then with a confirmation or error state, also represented in JSON.
The code to make the requests and receive the response is:
using System;
using System.IO;
using System.Net;
using System.Net.Sockets;
using System.Text;
using System.Collections.Generic;
using System.Threading;
using Microsoft.Win32;

public class TcpClientSample
{
    public static void SendMessage(TcpClient client, string msg)
    {
        Console.WriteLine("REQUEST:" + msg);
        NetworkStream stream = client.GetStream();
        byte[] myWriteBuffer = Encoding.ASCII.GetBytes(msg);
        stream.Write(myWriteBuffer, 0, myWriteBuffer.Length);
        byte[] myWriteBuffer2 = Encoding.ASCII.GetBytes("\r\n");
        stream.Write(myWriteBuffer2, 0, myWriteBuffer2.Length);
        string gResponse = "";
        BinaryReader r = new BinaryReader(stream);
        int receivedMessages = 0;
        while (true)
        {
            while (true)
            {
                char currentChar = r.ReadChar();
                if (currentChar == '\n')
                    break;
                else
                    gResponse = gResponse + currentChar;
            }
            if (gResponse != "")
            {
                Console.WriteLine("RESPONSE:" + gResponse);
                receivedMessages = receivedMessages + 1;
            }
            if (receivedMessages == 2)
            {
                break;
            }
        }
    }

    public static void Main()
    {
        List<string> messages = new List<string>();
        for (int i = 0; i < 1; i++)
        {
            String msg = "{ \"user\" : \"James\", \"email\" : \"james#domain.pt\" }";
            messages.Add(msg);
        }
        TcpClient client = new TcpClient();
        client.Connect("someIp", somePort);
        int sentMessages = 0;
        int receivedMessages = 0;
        foreach (string msg in messages)
        {
            Thread newThread = new Thread(() =>
            {
                sentMessages = sentMessages + 1;
                Console.WriteLine("SENT MESSAGES: " + sentMessages);
                SendMessage(client, msg);
                receivedMessages = receivedMessages + 1;
                Console.WriteLine("RECEIVED MESSAGES: " + receivedMessages);
            });
            newThread.Start();
        }
        Console.ReadLine();
    }
}
If I send a few emails (up to 10), the network stream is OK.
But if I send thousands of emails I get garbled characters like:
:{iyo"asn ooyes" "ncd" 0,"s_d:"4379" nme" 92729,"er_u" ,"ed_t_i" 2#" p cin_d:"921891010-11:11.725,"s" 4663175D0105E6912ADAAFFF6FDA393367" rpy:"rcein"
Why is this?
Don't worry I'm not a spammer :D
When you write to a TCP socket, the send call reports how much data was actually accepted. When the send buffer is full that can be less than you asked for, but you advance your send buffer anyway; you should advance it by the return value :)
Edit: it looks like you're using a stream abstraction which writes an internal buffer. The situation is the same: you are treating the message as completely sent when the internal buffer state says otherwise, i.e. position does not equal limit. You need to keep sending until the remaining amount of buffer is 0 before moving on.
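The underlying point is that TCP delivers a byte stream, not messages: a single read can return half a message, or several messages glued together, so the receiver has to buffer bytes until it sees the delimiter. A minimal sketch of that idea in Python (the \r\n framing matches the gateway protocol described above; the class name is just for illustration):

```python
class LineFramer:
    """Accumulates raw TCP chunks and yields complete newline-terminated
    messages; a partial message stays buffered until the rest arrives."""

    def __init__(self):
        self._buffer = b""

    def feed(self, chunk):
        # Append whatever the socket returned, however it was split.
        self._buffer += chunk
        messages = []
        while b"\n" in self._buffer:
            line, _, self._buffer = self._buffer.partition(b"\n")
            messages.append(line.rstrip(b"\r").decode("utf-8"))
        return messages
```

Feeding it b'{"status": "pro' returns nothing; feeding the rest of that message plus a second one returns both complete JSON strings, which is exactly the behavior a per-read decode cannot guarantee.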
I solved this issue by having a single method just to read from the stream, like this:
private TcpClient client;
private NetworkStream stream;

public void ListenFromGateway()
{
    ...
    while (true)
    {
        byte[] bytes = new byte[client.ReceiveBufferSize];
        // BLOCKS UNTIL AT LEAST ONE BYTE IS READ
        int bytesRead = stream.Read(bytes, 0, client.ReceiveBufferSize);
        // DECODE ONLY THE BYTES ACTUALLY RECEIVED
        string returndata = Encoding.UTF8.GetString(bytes, 0, bytesRead);
        // REMOVE THE EXCEEDING CHARACTERS STARTING AT \r
        returndata = returndata.Remove(returndata.IndexOf('\r'));
        ...
    }
Thanks for the help
