TypeError: JSON.stringify cannot serialize cyclic structures - iOS

I am using the camera plugin in my Ionic 2 app. It works fine on Android, but it throws an error on iOS after a picture is taken with the camera plugin, while I am converting the data URI to a Blob. Could anyone suggest where I am going wrong?
This is the error in Xcode:
/www/build/polyfills.js:2:30128 ERROR: error JSON.stringify()ing
argument: TypeError: JSON.stringify cannot serialize cyclic
structures.
And this one in the console:
Uncaught (in promise): Error: InvalidCharacterError: DOM Exception 5
atob#[native code] dataURItoBlob
This is the code that converts the data URI to a Blob:
function dataURItoBlob(dataURI) {
    // convert base64/URLEncoded data component to raw binary data held in a string
    var byteString;
    if (dataURI.split(',')[0].indexOf('base64') >= 0)
        byteString = atob(dataURI.split(',')[1]);
    else
        byteString = unescape(dataURI.split(',')[1]);
    // separate out the mime component
    var mimeString = dataURI.split(',')[0].split(':')[1].split(';')[0];
    // write the bytes of the string to a typed array
    var ia = new Uint8Array(byteString.length);
    for (var i = 0; i < byteString.length; i++) {
        ia[i] = byteString.charCodeAt(i);
    }
    return new Blob([ia], {type: mimeString});
}
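For what it's worth, the atob error above (InvalidCharacterError: DOM Exception 5) is usually thrown when the string passed to atob contains something outside the base64 alphabet, for example line breaks in the encoded image data, or a bare base64 string with no data: prefix (in which case dataURI.split(',')[1] is undefined). Below is a minimal TypeScript sketch of a more defensive conversion; the dataURItoBlobSafe name and the image/jpeg fallback MIME type are illustrative assumptions, not part of the original code.
// Hedged sketch: accept either a full data URI or a bare base64 string,
// strip whitespace before decoding, and fall back to an assumed MIME type.
// It assumes the payload is base64-encoded, as the camera plugin returns.
function dataURItoBlobSafe(dataURI: string, defaultMime: string = 'image/jpeg'): Blob {
    let mimeString = defaultMime;
    let data = dataURI;

    // If a "data:<mime>;base64," prefix is present, strip it and read the MIME type.
    const commaIndex = dataURI.indexOf(',');
    if (dataURI.indexOf('data:') === 0 && commaIndex >= 0) {
        const header = dataURI.substring(0, commaIndex); // e.g. "data:image/jpeg;base64"
        data = dataURI.substring(commaIndex + 1);
        mimeString = header.split(':')[1].split(';')[0] || defaultMime;
    }

    // atob() throws InvalidCharacterError on whitespace and other non-base64
    // characters, so remove line breaks before decoding.
    const byteString = atob(data.replace(/\s/g, ''));

    const ia = new Uint8Array(byteString.length);
    for (let i = 0; i < byteString.length; i++) {
        ia[i] = byteString.charCodeAt(i);
    }
    return new Blob([ia], { type: mimeString });
}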

Related

Using C# send Avro message to Azure Event Hub and then de-serialize using Scala Structured Streaming in Databricks 7.2/ Scala 3.0

So I have been banging my head against this for the last couple of days. I am having trouble de-serializing an Avro file that we are generating and sending into Azure Event Hub. We are attempting to do this with Databricks Runtime 7.2 Structured Streaming, using the newer from_avro method described here to de-serialize the body of the event message.
import org.apache.spark.eventhubs._
import org.apache.spark.sql.functions._
import org.apache.spark.sql.avro._
import org.apache.avro._
import org.apache.spark.sql.types._
import org.apache.spark.sql.avro.functions._
val connStr = "<EventHubConnectionstring>"
val customEventhubParameters =
  EventHubsConf(connStr.toString())
    .setMaxEventsPerTrigger(5)
    //.setStartingPosition(EventPosition.fromStartOfStream)

val incomingStream = spark
  .readStream
  .format("eventhubs")
  .options(customEventhubParameters.toMap)
  .load()
  .filter($"properties".getItem("TableName") === "Branches")
val avroSchema = s"""{"type":"record","name":"Branches","fields":[{"name":"_src_ChangeOperation","type":["null","string"]},{"name":"_src_CurrentTrackingId","type":["null","long"]},{"name":"_src_RecordExtractUTCTimestamp","type":"string"},{"name":"ID","type":["null","int"]},{"name":"BranchCode","type":["null","string"]},{"name":"BranchName","type":["null","string"]},{"name":"Address1","type":["null","string"]},{"name":"Address2","type":["null","string"]},{"name":"City","type":["null","string"]},{"name":"StateID","type":["null","int"]},{"name":"ZipCode","type":["null","string"]},{"name":"Telephone","type":["null","string"]},{"name":"Contact","type":["null","string"]},{"name":"Title","type":["null","string"]},{"name":"DOB","type":["null","string"]},{"name":"TimeZoneID","type":["null","int"]},{"name":"ObserveDaylightSaving","type":["null","boolean"]},{"name":"PaySummerTimeHour","type":["null","boolean"]},{"name":"PayWinterTimeHour","type":["null","boolean"]},{"name":"BillSummerTimeHour","type":["null","boolean"]},{"name":"BillWinterTimeHour","type":["null","boolean"]},{"name":"Deleted","type":["null","boolean"]},{"name":"LastUpdated","type":["null","string"]},{"name":"txJobID","type":["null","string"]},{"name":"SourceID","type":["null","string"]},{"name":"HP_UseHolPayHourMethod","type":["null","boolean"]},{"name":"HP_HourlyRatePercent","type":["null","float"]},{"name":"HP_RequiredWeeksOfEmployment","type":["null","float"]},{"name":"rgUseSystemSettings","type":["null","boolean"]},{"name":"rgDutySplitBy","type":["null","int"]},{"name":"rgBasePeriodDate","type":["null","string"]},{"name":"rgFirstDayOfWeek","type":["null","int"]},{"name":"rgDutyStartOfDayTime","type":["null","string"]},{"name":"rgHolidayStartOfDayTime","type":["null","string"]},{"name":"rgMinimumTimePeriod","type":["null","int"]},{"name":"rgLoadPublicTable","type":["null","boolean"]},{"name":"rgPOTPayPeriodID","type":["null","int"]},{"name":"rgPOT1","type":["null","string"]},{"name":"rgPOT2","type":["null","string"]},{"name":"Facsimile","type":["null","string"]},{"name":"CountryID","type":["null","int"]},{"name":"EmailAddress","type":["null","string"]},{"name":"ContractSecurityHistoricalWeeks","type":["null","int"]},{"name":"ContractSecurityFutureWeeks","type":["null","int"]},{"name":"TimeLinkTelephone1","type":["null","string"]},{"name":"TimeLinkTelephone2","type":["null","string"]},{"name":"TimeLinkTelephone3","type":["null","string"]},{"name":"TimeLinkTelephone4","type":["null","string"]},{"name":"TimeLinkTelephone5","type":["null","string"]},{"name":"AutoTakeMissedCalls","type":["null","boolean"]},{"name":"AutoTakeMissedCallsDuration","type":["null","string"]},{"name":"AutoTakeApplyDurationToCheckCalls","type":["null","boolean"]},{"name":"AutoTakeMissedCheckCalls","type":["null","boolean"]},{"name":"AutoTakeMissedCheckCallsDuration","type":["null","string"]},{"name":"DocumentLocation","type":["null","string"]},{"name":"DefaultPortalAccess","type":["null","boolean"]},{"name":"DefaultPortalSecurityRoleID","type":["null","int"]},{"name":"EmployeeTemplateID","type":["null","int"]},{"name":"SiteCardTemplateID","type":["null","int"]},{"name":"TSAllowancesHeaderID","type":["null","int"]},{"name":"TSMinimumWageHeaderID","type":["null","int"]},{"name":"TimeLinkClaimMade","type":["null","boolean"]},{"name":"TSAllowancePeriodBaseDate","type":["null","string"]},{"name":"TSAllowancePeriodID","type":["null","int"]},{"name":"TSMinimumWageCalcMethodID","type":["null","int"]},{"name":"FlexibleShiftsHeaderID","type":["null","int"]},{"name":"Sche
dulingUseSystemSettings","type":["null","boolean"]},{"name":"MinimumRestPeriod","type":["null","int"]},{"name":"TSMealBreakHeaderID","type":["null","int"]},{"name":"ServiceTracImportType","type":["null","int"]},{"name":"StandDownDiaryEventID","type":["null","int"]},{"name":"ScheduledDutyChangeMessageTemplateId","type":["null","int"]},{"name":"ScheduledDutyAddedMessageTemplateId","type":["null","int"]},{"name":"ScheduledDutyRemovedMessageTemplateId","type":["null","int"]},{"name":"NegativeMessageResponsesPermitted","type":["null","boolean"]},{"name":"PortalEventsStandardLocFirst","type":["null","boolean"]},{"name":"ReminderMessage","type":["null","boolean"]},{"name":"ReminderMessageDaysBefore","type":["null","int"]},{"name":"ReminderMessageTemplateId","type":["null","int"]},{"name":"ScheduledDutyChangeMessageAllowReply","type":["null","boolean"]},{"name":"ScheduledDutyAddedMessageAllowReply","type":["null","boolean"]},{"name":"PayAlertEscalationGroup","type":["null","int"]},{"name":"BudgetedPay","type":["null","int"]},{"name":"PayAlertVariance","type":["null","string"]},{"name":"BusinessUnitID","type":["null","int"]},{"name":"APH_Hours","type":["null","float"]},{"name":"APH_Period","type":["null","int"]},{"name":"APH_PeriodCount","type":["null","int"]},{"name":"AveragePeriodHoursRuleId","type":["null","int"]},{"name":"HolidayScheduleID","type":["null","int"]},{"name":"AutomationRuleProfileId","type":["null","int"]}]}"""
val decoded_df = incomingStream
  .select(
    from_avro($"body", avroSchema).alias("payload")
  )

val query1 = (
  decoded_df
    .writeStream
    .format("memory")
    .queryName("read_hub")
    .start()
)
I have verified that the file we are sending has a valid schema, that it contains data, and that it is getting to the streaming job in the notebook before failing with the following stack trace, which states that the data is malformed. However, I am able to write the generated file to a .avro file and de-serialize it using the normal .read.format("avro") method just fine.
at org.apache.spark.sql.execution.datasources.v2.V2TableWriteExec.writeWithV2(WriteToDataSourceV2Exec.scala:413)
at org.apache.spark.sql.execution.datasources.v2.V2TableWriteExec.writeWithV2$(WriteToDataSourceV2Exec.scala:361)
at org.apache.spark.sql.execution.datasources.v2.WriteToDataSourceV2Exec.writeWithV2(WriteToDataSourceV2Exec.scala:322)
at org.apache.spark.sql.execution.datasources.v2.WriteToDataSourceV2Exec.run(WriteToDataSourceV2Exec.scala:329)
at org.apache.spark.sql.execution.datasources.v2.V2CommandExec.result$lzycompute(V2CommandExec.scala:39)
at org.apache.spark.sql.execution.datasources.v2.V2CommandExec.result(V2CommandExec.scala:39)
at org.apache.spark.sql.execution.datasources.v2.V2CommandExec.executeCollect(V2CommandExec.scala:45)
at org.apache.spark.sql.execution.collect.Collector$.callExecuteCollect(Collector.scala:118)
at org.apache.spark.sql.execution.collect.Collector$.collect(Collector.scala:69)
at org.apache.spark.sql.execution.collect.Collector$.collect(Collector.scala:88)
at org.apache.spark.sql.execution.ResultCacheManager.getOrComputeResult(ResultCacheManager.scala:508)
at org.apache.spark.sql.execution.ResultCacheManager.getOrComputeResult(ResultCacheManager.scala:480)
at org.apache.spark.sql.execution.SparkPlan.executeCollectResult(SparkPlan.scala:396)
at org.apache.spark.sql.Dataset.collectResult(Dataset.scala:2986)
at org.apache.spark.sql.Dataset.collectFromPlan(Dataset.scala:3692)
at org.apache.spark.sql.Dataset.$anonfun$collect$1(Dataset.scala:2953)
at org.apache.spark.sql.Dataset.$anonfun$withAction$1(Dataset.scala:3684)
at org.apache.spark.sql.execution.SQLExecution$.$anonfun$withCustomExecutionEnv$5(SQLExecution.scala:116)
at org.apache.spark.sql.execution.SQLExecution$.withSQLConfPropagated(SQLExecution.scala:248)
at org.apache.spark.sql.execution.SQLExecution$.$anonfun$withCustomExecutionEnv$1(SQLExecution.scala:101)
at org.apache.spark.sql.SparkSession.withActive(SparkSession.scala:835)
at org.apache.spark.sql.execution.SQLExecution$.withCustomExecutionEnv(SQLExecution.scala:77)
at org.apache.spark.sql.execution.SQLExecution$.withNewExecutionId(SQLExecution.scala:198)
at org.apache.spark.sql.Dataset.withAction(Dataset.scala:3682)
at org.apache.spark.sql.Dataset.collect(Dataset.scala:2953)
at org.apache.spark.sql.execution.streaming.MicroBatchExecution.$anonfun$runBatch$16(MicroBatchExecution.scala:586)
at org.apache.spark.sql.execution.SQLExecution$.$anonfun$withCustomExecutionEnv$5(SQLExecution.scala:116)
at org.apache.spark.sql.execution.SQLExecution$.withSQLConfPropagated(SQLExecution.scala:248)
at org.apache.spark.sql.execution.SQLExecution$.$anonfun$withCustomExecutionEnv$1(SQLExecution.scala:101)
at org.apache.spark.sql.SparkSession.withActive(SparkSession.scala:835)
at org.apache.spark.sql.execution.SQLExecution$.withCustomExecutionEnv(SQLExecution.scala:77)
at org.apache.spark.sql.execution.SQLExecution$.withNewExecutionId(SQLExecution.scala:198)
at org.apache.spark.sql.execution.streaming.MicroBatchExecution.$anonfun$runBatch$15(MicroBatchExecution.scala:581)
at org.apache.spark.sql.execution.streaming.ProgressReporter.reportTimeTaken(ProgressReporter.scala:276)
at org.apache.spark.sql.execution.streaming.ProgressReporter.reportTimeTaken$(ProgressReporter.scala:274)
at org.apache.spark.sql.execution.streaming.StreamExecution.reportTimeTaken(StreamExecution.scala:71)
at org.apache.spark.sql.execution.streaming.MicroBatchExecution.runBatch(MicroBatchExecution.scala:581)
at org.apache.spark.sql.execution.streaming.MicroBatchExecution.$anonfun$runActivatedStream$2(MicroBatchExecution.scala:231)
at scala.runtime.java8.JFunction0$mcV$sp.apply(JFunction0$mcV$sp.java:23)
at org.apache.spark.sql.execution.streaming.ProgressReporter.reportTimeTaken(ProgressReporter.scala:276)
at org.apache.spark.sql.execution.streaming.ProgressReporter.reportTimeTaken$(ProgressReporter.scala:274)
at org.apache.spark.sql.execution.streaming.StreamExecution.reportTimeTaken(StreamExecution.scala:71)
at org.apache.spark.sql.execution.streaming.MicroBatchExecution.$anonfun$runActivatedStream$1(MicroBatchExecution.scala:199)
at org.apache.spark.sql.execution.streaming.ProcessingTimeExecutor.execute(TriggerExecutor.scala:57)
at org.apache.spark.sql.execution.streaming.MicroBatchExecution.runActivatedStream(MicroBatchExecution.scala:193)
at org.apache.spark.sql.execution.streaming.StreamExecution.org$apache$spark$sql$execution$streaming$StreamExecution$$runStream(StreamExecution.scala:346)
at org.apache.spark.sql.execution.streaming.StreamExecution$$anon$1.run(StreamExecution.scala:259)
Caused by: org.apache.spark.SparkException: Job aborted due to stage failure: Task 0 in stage 37.0 failed 4 times, most recent failure: Lost task 0.3 in stage 37.0 (TID 84, 10.139.64.5, executor 0): org.apache.spark.SparkException: Malformed records are detected in record parsing. Current parse Mode: FAILFAST. To process malformed records as null result, try setting the option 'mode' as 'PERMISSIVE'.
at org.apache.spark.sql.avro.AvroDataToCatalyst.nullSafeEval(AvroDataToCatalyst.scala:111)
at org.apache.spark.sql.catalyst.expressions.GeneratedClass$GeneratedIteratorForCodegenStage1.processNext(Unknown Source)
at org.apache.spark.sql.execution.BufferedRowIterator.hasNext(BufferedRowIterator.java:43)
at org.apache.spark.sql.execution.WholeStageCodegenExec$$anon$1.hasNext(WholeStageCodegenExec.scala:731)
at org.apache.spark.sql.execution.datasources.v2.DataWritingSparkTask$.$anonfun$run$7(WriteToDataSourceV2Exec.scala:438)
at org.apache.spark.util.Utils$.tryWithSafeFinallyAndFailureCallbacks(Utils.scala:1615)
at org.apache.spark.sql.execution.datasources.v2.DataWritingSparkTask$.run(WriteToDataSourceV2Exec.scala:477)
at org.apache.spark.sql.execution.datasources.v2.V2TableWriteExec.$anonfun$writeWithV2$2(WriteToDataSourceV2Exec.scala:385)
at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:90)
at org.apache.spark.scheduler.Task.doRunTask(Task.scala:144)
at org.apache.spark.scheduler.Task.run(Task.scala:117)
at org.apache.spark.executor.Executor$TaskRunner.$anonfun$run$9(Executor.scala:657)
at org.apache.spark.util.Utils$.tryWithSafeFinally(Utils.scala:1581)
at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:660)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:748)
Caused by: java.lang.ArrayIndexOutOfBoundsException: -40
at org.apache.avro.io.parsing.Symbol$Alternative.getSymbol(Symbol.java:424)
at org.apache.avro.io.ResolvingDecoder.doAction(ResolvingDecoder.java:290)
at org.apache.avro.io.parsing.Parser.advance(Parser.java:88)
at org.apache.avro.io.ResolvingDecoder.readIndex(ResolvingDecoder.java:267)
at org.apache.avro.generic.GenericDatumReader.readWithoutConversion(GenericDatumReader.java:179)
at org.apache.avro.generic.GenericDatumReader.read(GenericDatumReader.java:153)
at org.apache.avro.generic.GenericDatumReader.readField(GenericDatumReader.java:232)
at org.apache.avro.generic.GenericDatumReader.readRecord(GenericDatumReader.java:222)
at org.apache.avro.generic.GenericDatumReader.readWithoutConversion(GenericDatumReader.java:175)
at org.apache.avro.generic.GenericDatumReader.read(GenericDatumReader.java:153)
at org.apache.avro.generic.GenericDatumReader.read(GenericDatumReader.java:145)
at org.apache.spark.sql.avro.AvroDataToCatalyst.nullSafeEval(AvroDataToCatalyst.scala:100)
... 16 more
Driver stacktrace:
at org.apache.spark.scheduler.DAGScheduler.failJobAndIndependentStages(DAGScheduler.scala:2478)
at org.apache.spark.scheduler.DAGScheduler.$anonfun$abortStage$2(DAGScheduler.scala:2427)
at org.apache.spark.scheduler.DAGScheduler.$anonfun$abortStage$2$adapted(DAGScheduler.scala:2426)
at scala.collection.mutable.ResizableArray.foreach(ResizableArray.scala:62)
at scala.collection.mutable.ResizableArray.foreach$(ResizableArray.scala:55)
at scala.collection.mutable.ArrayBuffer.foreach(ArrayBuffer.scala:49)
at org.apache.spark.scheduler.DAGScheduler.abortStage(DAGScheduler.scala:2426)
at org.apache.spark.scheduler.DAGScheduler.$anonfun$handleTaskSetFailed$1(DAGScheduler.scala:1131)
at org.apache.spark.scheduler.DAGScheduler.$anonfun$handleTaskSetFailed$1$adapted(DAGScheduler.scala:1131)
at scala.Option.foreach(Option.scala:407)
at org.apache.spark.scheduler.DAGScheduler.handleTaskSetFailed(DAGScheduler.scala:1131)
at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.doOnReceive(DAGScheduler.scala:2678)
at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.onReceive(DAGScheduler.scala:2625)
at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.onReceive(DAGScheduler.scala:2613)
at org.apache.spark.util.EventLoop$$anon$1.run(EventLoop.scala:49)
at org.apache.spark.scheduler.DAGScheduler.runJob(DAGScheduler.scala:917)
at org.apache.spark.SparkContext.runJob(SparkContext.scala:2313)
at org.apache.spark.sql.execution.datasources.v2.V2TableWriteExec.writeWithV2(WriteToDataSourceV2Exec.scala:382)
... 46 more
Caused by: org.apache.spark.SparkException: Malformed records are detected in record parsing. Current parse Mode: FAILFAST. To process malformed records as null result, try setting the option 'mode' as 'PERMISSIVE'.
at org.apache.spark.sql.avro.AvroDataToCatalyst.nullSafeEval(AvroDataToCatalyst.scala:111)
at org.apache.spark.sql.catalyst.expressions.GeneratedClass$GeneratedIteratorForCodegenStage1.processNext(Unknown Source)
at org.apache.spark.sql.execution.BufferedRowIterator.hasNext(BufferedRowIterator.java:43)
at org.apache.spark.sql.execution.WholeStageCodegenExec$$anon$1.hasNext(WholeStageCodegenExec.scala:731)
at org.apache.spark.sql.execution.datasources.v2.DataWritingSparkTask$.$anonfun$run$7(WriteToDataSourceV2Exec.scala:438)
at org.apache.spark.util.Utils$.tryWithSafeFinallyAndFailureCallbacks(Utils.scala:1615)
at org.apache.spark.sql.execution.datasources.v2.DataWritingSparkTask$.run(WriteToDataSourceV2Exec.scala:477)
at org.apache.spark.sql.execution.datasources.v2.V2TableWriteExec.$anonfun$writeWithV2$2(WriteToDataSourceV2Exec.scala:385)
at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:90)
at org.apache.spark.scheduler.Task.doRunTask(Task.scala:144)
at org.apache.spark.scheduler.Task.run(Task.scala:117)
at org.apache.spark.executor.Executor$TaskRunner.$anonfun$run$9(Executor.scala:657)
at org.apache.spark.util.Utils$.tryWithSafeFinally(Utils.scala:1581)
at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:660)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:748)
Caused by: java.lang.ArrayIndexOutOfBoundsException: -40
at org.apache.avro.io.parsing.Symbol$Alternative.getSymbol(Symbol.java:424)
at org.apache.avro.io.ResolvingDecoder.doAction(ResolvingDecoder.java:290)
at org.apache.avro.io.parsing.Parser.advance(Parser.java:88)
at org.apache.avro.io.ResolvingDecoder.readIndex(ResolvingDecoder.java:267)
at org.apache.avro.generic.GenericDatumReader.readWithoutConversion(GenericDatumReader.java:179)
at org.apache.avro.generic.GenericDatumReader.read(GenericDatumReader.java:153)
at org.apache.avro.generic.GenericDatumReader.readField(GenericDatumReader.java:232)
at org.apache.avro.generic.GenericDatumReader.readRecord(GenericDatumReader.java:222)
at org.apache.avro.generic.GenericDatumReader.readWithoutConversion(GenericDatumReader.java:175)
at org.apache.avro.generic.GenericDatumReader.read(GenericDatumReader.java:153)
at org.apache.avro.generic.GenericDatumReader.read(GenericDatumReader.java:145)
at org.apache.spark.sql.avro.AvroDataToCatalyst.nullSafeEval(AvroDataToCatalyst.scala:100)
... 16 more
Tech:
- C# Azure Function v3 (.NET Core) generating the Avro file using Avro 1.8.2
- Avro file is serialized to a byte array using the generic writer, not the specific writer, and sent to Azure Event Hub
- Databricks Runtime 7.2/Scala 3.0
- Databricks notebooks written in Scala
- Databricks Structured Streaming notebook to de-serialize the Avro message and send it to a Delta Lake table

NOT using the following:
- Event Hub Capture
- Kafka
- Schema registry
OK, so I just figured out what the issue was. It was in how we were generating the Avro message before sending it to Event Hub. In our serialization method we were using var writer = new GenericDatumWriter<GenericRecord>(schema); and an IFileWriter<GenericRecord> to write to a memory stream, and then just taking the byte array of that stream, as seen below.
public byte[] Serialize(DataCapture data)
{
    var schema = GenerateSchema(data.Schema);
    var writer = new GenericDatumWriter<GenericRecord>(schema);
    using (var ms = new MemoryStream())
    {
        using (IFileWriter<GenericRecord> fileWriter = DataFileWriter<GenericRecord>.OpenWriter(writer, ms))
        {
            foreach (var jsonString in data.Rows)
            {
                var record = new GenericRecord(schema);
                var obj = JsonConvert.DeserializeObject<JObject>(jsonString);
                foreach (var column in data.Schema.Columns)
                {
                    switch (MapDataType(column.DataTypeName))
                    {
                        case AvroTypeEnum.Boolean:
                            record.Add(column.ColumnName, obj.GetValue(column.ColumnName).Value<bool?>());
                            break;
                        // Map all data types etc. ... removed to shorten example
                        default:
                            record.Add(column.ColumnName, obj.GetValue(column.ColumnName).Value<string>());
                            break;
                    }
                }
                fileWriter.Append(record);
            }
        }
        return ms.ToArray();
    }
}
What we actually should do is use var writer = new DefaultWriter(schema); and var encoder = new BinaryEncoder(ms); and then write the records with writer.Write(record, encoder); before returning the byte array of the stream.
public byte[] Serialize(DataCapture data)
{
    var schema = GenerateSchema(data.Schema);
    var writer = new DefaultWriter(schema);
    using (var ms = new MemoryStream())
    {
        var encoder = new BinaryEncoder(ms);
        foreach (var jsonString in data.Rows)
        {
            var record = new GenericRecord(schema);
            var obj = JsonConvert.DeserializeObject<JObject>(jsonString);
            foreach (var column in data.Schema.Columns)
            {
                switch (MapDataType(column.DataTypeName))
                {
                    case AvroTypeEnum.Boolean:
                        record.Add(column.ColumnName, obj.GetValue(column.ColumnName).Value<bool?>());
                        break;
                    // Map all data types etc. ... removed to shorten example
                    default:
                        record.Add(column.ColumnName, obj.GetValue(column.ColumnName).Value<string>());
                        break;
                }
            }
            writer.Write(record, encoder);
        }
        return ms.ToArray();
    }
}
So the lesson learned is that not all Avro memory streams converted to byte[] are the same. The from_avro method will only de-serialize Avro data that has been binary encoded with the BinaryEncoder class, not data created with the IFileWriter. If there is something that I should be doing instead please let me know, but this fixed my issue. Hopefully my pain will spare others the same.

JSON Parse error: Unrecognized token '<' in titanium

I'm receiving the error JSON Parse error: Unrecognized token '<', but only on iOS. On Android it works fine and the JSON seems to be right (you can put the link in your web browser and see). The error is on this line: Data = JSON.parse(this.responseText); but I can't understand why, and why it works on Android and not on iOS.
var client = Ti.Network.createHTTPClient({
    onload : function(e) {
        Data = JSON.parse(this.responseText);
        Size = Object.keys(Data).length;
        AddList();
    },
    onerror : function(e) {
    },
    timeout : 15000
});
client.open("GET", "http://lamadeus.virtualweb.pt/site/app_mobile/teste.php?act=getprodsdestaque");
client.send();
Have you tried printing the responseText on iOS? Usually this error indicates that the request is receiving HTML instead of JSON.
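To illustrate that suggestion, here is a hedged sketch reusing the question's Ti.Network client: it logs the HTTP status and raw body and guards the parse, so an HTML error page (which starts with '<') shows up in the log instead of crashing JSON.parse. The log messages are illustrative; Data, Size and AddList come from the question's code.
var client = Ti.Network.createHTTPClient({
    onload: function(e) {
        // Log what actually came back; an HTML error page will start with '<'.
        Ti.API.info('status: ' + this.status);
        Ti.API.info('body: ' + this.responseText);
        var Data;
        try {
            Data = JSON.parse(this.responseText);
        } catch (err) {
            Ti.API.error('Response was not valid JSON: ' + err);
            return;
        }
        Size = Object.keys(Data).length;
        AddList();
    },
    onerror: function(e) {
        Ti.API.error('HTTP error: ' + e.error);
    },
    timeout: 15000
});
client.open("GET", "http://lamadeus.virtualweb.pt/site/app_mobile/teste.php?act=getprodsdestaque");
client.send();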
Check after replacing:
&lt; with <
&gt; with >
&amp; with &
in your this.responseText and after that try to parse.

error while converting from nt to rdf/xml format in Jena

What is the meaning of the following error message?
I am attempting to convert dogfood.nt to its RDF/XML representation; what does the StackOverflowError indicate?
<j.12:Person rdf:about="http://data.semanticweb.org/person/rich-keller">
<j.12:name>Rich Keller</j.12:name>
<rdfs:label>Rich Keller</rdfs:label>
<j.3:affiliation rdf:resource="http://data.semanticweb.org/organization/nasa-ames-research-center"/>
<j.4:holdsRole rdf:resource="http://data.semanticweb.org/conference/iswc/2005/pc-member-at-iswc2005-research-track"/>
</j.12:PersException in thread "main" java.lang.StackOverflowError
at java.util.regex.Pattern$BranchConn.match(Pattern.java:4568)
at java.util.regex.Pattern$GroupTail.match(Pattern.java:4717)
at java.util.regex.Pattern$BranchConn.match(Pattern.java:4568)
at java.util.regex.Pattern$GroupTail.match(Pattern.java:4717)
at java.util.regex.Pattern$Curly.match0(Pattern.java:4272)
at java.util.regex.Pattern$Curly.match(Pattern.java:4234)
at java.util.regex.Pattern$GroupHead.match(Pattern.java:4658)
at java.util.regex.Pattern$Branch.match(Pattern.java:4604)
at java.util.regex.Pattern$GroupHead.match(Pattern.java:4658)
at java.util.regex.Pattern$Branch.match(Pattern.java:4604)
at java.util.regex.Pattern$Branch.match(Pattern.java:4602)
Following is the code snippet used:
Model model11 = ModelFactory.createDefaultModel();
InputStream is1 = FileManager.get().open("dogfood4.nt");
if (is1 != null) {
    model11.read(is1, null, "N-TRIPLE");
    model11.write(os1, "RDF/XML");
} else {
    System.err.println("cannot read file");
}
I am using the Semantic Web Dog Food N-Triples file.

Highcharts-Export Server for Java JSON parsing issue

I configured the Highcharts export server for Java from the current master on GitHub. When I tried to export an image using the demo page, a corrupted file was returned. I have debugged the code and found that the following message is returned as the result in the validate method in ServerObjectFactory.java:
String result = server.request("{\"status\":\"isok\"}");
The value of the result String is "Failed rendering:SyntaxError: Unable to parse JSON string"
What could be the cause of this issue?
Thank you.
That's very strange; the JSON string is syntactically correct. I can point out where the error is generated. In the highcharts-convert.js file in highcharts-export-convert/src/main/resources/phantomjs, the incoming request is expected to be a JSON string and is parsed, see line 469:
function (request, response) {
    var jsonStr = request.post,
        params,
        msg;
    try {
        params = JSON.parse(jsonStr); // L469
        if (params.status) {
            // for server health validation
            response.statusCode = 200;
            response.write('OK');
            response.close();
        } else {
            ....
Put a console.log(jsonStr) before JSON.parse(jsonStr). I hope this gives you a clue as to why it's throwing an error.
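As a concrete sketch of that debugging step (parseRequestBody is an illustrative name, not part of highcharts-convert.js): log the raw body before parsing, and log the parse failure itself, so the offending payload is visible in the console.
function parseRequestBody(jsonStr) {
    // Log the raw body first, so the offending payload is visible.
    console.log('incoming request body: ' + jsonStr);
    try {
        return JSON.parse(jsonStr);
    } catch (parseError) {
        console.log('Failed to parse request as JSON: ' + parseError);
        return null;
    }
}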

The youtube API sometimes throws error: Call to a member function children() on a non-object

When I launch the PHP script it sometimes works fine, but many other times it gives me this error:
Fatal error: Call to a member function children() on a non-object in
/membri/americanhorizon/ytvideo/rilevametadatadaurlyoutube.php on line
21
This is the first part of the code:
// set feed URL
$feedURL = 'http://gdata.youtube.com/feeds/api/videos/dZec2Lbr_r8';
// read feed into SimpleXML object
$entry = simplexml_load_file($feedURL);
$video = parseVideoEntry($entry);
function parseVideoEntry($entry) {
    $obj = new stdClass;
    // get nodes in media: namespace for media information
    $media = $entry->children('http://search.yahoo.com/mrss/'); // <-- this is the doomed line 21
UPDATE: solution adopted
for ($i = 0; $i < count($fileArray); $i++)
{
    // set feed URL
    $feedURL = 'http://gdata.youtube.com/feeds/api/videos/'.$fileArray[$i];
    // read feed into SimpleXML object
    $entry = simplexml_load_file($feedURL);
    if (is_object($entry))
    {
        $video = parseVideoEntry($entry);
        echo ($video->description."|".$video->length);
        echo "<br>";
    }
    else
    {
        $i--;
    }
}
In this way I force the script to re-check the file that caused the error.
You are first of all calling a function:
$entry = simplexml_load_file($feedURL);
That function has a return value. You find it documented on the manual page of that function:
http://php.net/simplexml_load_file
Then you use that return value, in the form of the variable $entry, without verifying that the function call was successful.
Because of that, you run into an error next. However, your mistake is in how you treat the return value of the function.
Not dealing with return values properly is asking for trouble. Read about the function you use, check the return value(s), and proceed according to the success or error conditions.
$entry = simplexml_load_file($feedURL);
if (FALSE === $entry)
{
    // youtube not available.
}
else
{
    // that's what I love!
}
Sometimes? Really?
Take a look at this:
<?php
$dummy; //IN FACT, this var is NULL now
// Will throw exactly the same error you get
$dummy->children();
Why? Because you can only call a method on an object.
So, if you want to avoid errors like this one, the next time you call the method make sure that it's "possible":
<?php
if (is_object($dummy) && method_exists($dummy, 'children')) {
    // sure it works
    $dummy->children();
}
