How to deal with CoderException: cannot encode a null String with scio - google-cloud-dataflow

I just started using scio and Dataflow. I tried my code on one input file and it worked fine, but when I add more files to the input, I get the following exception:
java.lang.RuntimeException: org.apache.beam.sdk.coders.CoderException: cannot encode a null String
at org.apache.beam.runners.dataflow.worker.SimpleParDoFn$1.output(SimpleParDoFn.java:280)
at org.apache.beam.runners.core.SimpleDoFnRunner.outputWindowedValue(SimpleDoFnRunner.java:309)
at org.apache.beam.runners.core.SimpleDoFnRunner.access$700(SimpleDoFnRunner.java:77)
at org.apache.beam.runners.core.SimpleDoFnRunner$DoFnProcessContext.output(SimpleDoFnRunner.java:621)
at org.apache.beam.runners.core.SimpleDoFnRunner$DoFnProcessContext.output(SimpleDoFnRunner.java:609)
at com.spotify.scio.util.Functions$$anon$3.processElement(Functions.scala:158)
Caused by: org.apache.beam.sdk.coders.CoderException: cannot encode a null String
at org.apache.beam.sdk.coders.StringUtf8Coder.getEncodedElementByteSize(StringUtf8Coder.java:136)
at org.apache.beam.sdk.coders.StringUtf8Coder.getEncodedElementByteSize(StringUtf8Coder.java:37)
at org.apache.beam.sdk.coders.Coder.registerByteSizeObserver(Coder.java:291)
at com.spotify.scio.coders.RecordCoder.registerByteSizeObserver(Coder.scala:279)
at org.apache.beam.sdk.util.WindowedValue$FullWindowedValueCoder.registerByteSizeObserver(WindowedValue.java:564)
at org.apache.beam.sdk.util.WindowedValue$FullWindowedValueCoder.registerByteSizeObserver(WindowedValue.java:480)
at org.apache.beam.runners.dataflow.worker.IntrinsicMapTaskExecutorFactory$ElementByteSizeObservableCoder.registerByteSizeObserver(IntrinsicMapTaskExecutorFactory.java:399)
at org.apache.beam.runners.dataflow.worker.util.common.worker.OutputObjectAndByteCounter.update(OutputObjectAndByteCounter.java:125)
at org.apache.beam.runners.dataflow.worker.DataflowOutputCounter.update(DataflowOutputCounter.java:64)
at org.apache.beam.runners.dataflow.worker.util.common.worker.OutputReceiver.process(OutputReceiver.java:43)
at org.apache.beam.runners.dataflow.worker.SimpleParDoFn$1.output(SimpleParDoFn.java:272)
at org.apache.beam.runners.core.SimpleDoFnRunner.outputWindowedValue(SimpleDoFnRunner.java:309)
at org.apache.beam.runners.core.SimpleDoFnRunner.access$700(SimpleDoFnRunner.java:77)
at org.apache.beam.runners.core.SimpleDoFnRunner$DoFnProcessContext.output(SimpleDoFnRunner.java:621)
at org.apache.beam.runners.core.SimpleDoFnRunner$DoFnProcessContext.output(SimpleDoFnRunner.java:609)
at com.spotify.scio.util.Functions$$anon$3.processElement(Functions.scala:158)
at com.spotify.scio.util.Functions$$anon$3$DoFnInvoker.invokeProcessElement(Unknown Source)
at org.apache.beam.runners.core.SimpleDoFnRunner.invokeProcessElement(SimpleDoFnRunner.java:275)
at org.apache.beam.runners.core.SimpleDoFnRunner.processElement(SimpleDoFnRunner.java:240)
at org.apache.beam.runners.dataflow.worker.SimpleParDoFn.processElement(SimpleParDoFn.java:325)
at org.apache.beam.runners.dataflow.worker.util.common.worker.ParDoOperation.process(ParDoOperation.java:44)
at org.apache.beam.runners.dataflow.worker.util.common.worker.OutputReceiver.process(OutputReceiver.java:49)
at org.apache.beam.runners.dataflow.worker.util.common.worker.ReadOperation.runReadLoop(ReadOperation.java:201)
at org.apache.beam.runners.dataflow.worker.util.common.worker.ReadOperation.start(ReadOperation.java:159)
at org.apache.beam.runners.dataflow.worker.util.common.worker.MapTaskExecutor.execute(MapTaskExecutor.java:76)
at org.apache.beam.runners.dataflow.worker.BatchDataflowWorker.executeWork(BatchDataflowWorker.java:394)
at org.apache.beam.runners.dataflow.worker.BatchDataflowWorker.doWork(BatchDataflowWorker.java:363)
at org.apache.beam.runners.dataflow.worker.BatchDataflowWorker.getAndPerformWork(BatchDataflowWorker.java:291)
at org.apache.beam.runners.dataflow.worker.DataflowBatchWorkerHarness$WorkerThread.doWork(DataflowBatchWorkerHarness.java:135)
at org.apache.beam.runners.dataflow.worker.DataflowBatchWorkerHarness$WorkerThread.call(DataflowBatchWorkerHarness.java:115)
at org.apache.beam.runners.dataflow.worker.DataflowBatchWorkerHarness$WorkerThread.call(DataflowBatchWorkerHarness.java:102)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)
I guess one of my input files may contain some malformed data, but how do I skip the bad records? There is a similar question for Java Beam: com.google.cloud.dataflow.sdk.coders.CoderException: cannot encode a null String
So I tried this:
val scText = sc.textFile(input)
scText.setCoder(NullableCoder.of(StringUtf8Coder.of()))
It didn't help. Can someone help me with this? Thanks.

The scio team provided a solution to this problem: add --nullableCoders=true to the job's command-line arguments.
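For example, the flag is passed alongside whatever Dataflow options the job is already launched with, something like --project=my-project --runner=DataflowRunner --nullableCoders=true (only --nullableCoders is the relevant part here; the other option values are illustrative). The idea is that scio then uses null-tolerant coders, so a null String can be encoded instead of raising the CoderException above.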

Related

Unnest the nested PCollection using BeamSQL

I am trying to use Beam SQL to unnest a nested field of a PCollection. Assume the PCollection holds Employees and their details, where details is a nested collection. If I run a Beam SQL query like "SELECT PCOLLECTION.details FROM PCOLLECTION", I get the nested details back as an array in a separate PCollection. However, when I try to select a specific column from that nested details collection, I get an error that the column name cannot be found. When I try Beam SQL similar to BigQuery SQL, "SELECT X.address FROM PCOLLECTION, UNNEST(details) AS X", I get a NullPointerException. I am using Apache Beam 2.12.0.
I would appreciate some help on this.
Below is sample data for the nested details value (details has email and phone columns; each row can have any number of details entries, and this one has two):
WARNING: printValue:Row:[[Row:[lourdurajan#gmail.com, 9840618047], Row:[lourdurajan#sanmina.com, 9840618047]]]
Here is the Java stack trace for the second SELECT statement:
SELECT `X`.`email`
FROM `beam`.`PCOLLECTION` AS `PCOLLECTION`,
UNNEST(`PCOLLECTION`.`details`) AS `X`
May 08, 2019 11:23:30 AM org.apache.beam.sdk.extensions.sql.impl.BeamQueryPlanner convertToBeamRel
INFO: SQLPlan>
LogicalProject(email=[$3])
LogicalCorrelate(correlation=[$cor0], joinType=[inner], requiredColumns=[{2}])
BeamIOSourceRel(table=[[beam, PCOLLECTION]])
Uncollect
LogicalProject(details=[$cor0.details_2])
LogicalValues(tuples=[[{ 0 }]])
May 08, 2019 11:23:30 AM org.apache.beam.sdk.extensions.sql.impl.BeamQueryPlanner convertToBeamRel
INFO: BEAMPlan>
BeamCalcRel(expr#0..4=[{inputs}], email=[$t3])
BeamUnnestRel(unnestIndex=[2])
BeamIOSourceRel(table=[[beam, PCOLLECTION]])
[WARNING]
java.lang.NullPointerException
at org.apache.beam.sdk.extensions.sql.impl.utils.CalciteUtils.toSchema(CalciteUtils.java:171)
at org.apache.beam.sdk.extensions.sql.impl.rel.BeamUnnestRel$Transform.expand(BeamUnnestRel.java:93)
at org.apache.beam.sdk.extensions.sql.impl.rel.BeamUnnestRel$Transform.expand(BeamUnnestRel.java:87)
at org.apache.beam.sdk.Pipeline.applyInternal(Pipeline.java:537)
at org.apache.beam.sdk.Pipeline.applyTransform(Pipeline.java:488)
at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:66)
at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.lambda$buildPCollectionList$0(BeamSqlRelUtils.java:47)
at java.util.stream.ReferencePipeline$3$1.accept(ReferencePipeline.java:193)
at java.util.Iterator.forEachRemaining(Iterator.java:116)
at java.util.Spliterators$IteratorSpliterator.forEachRemaining(Spliterators.java:1801)
at java.util.stream.AbstractPipeline.copyInto(AbstractPipeline.java:481)
at java.util.stream.AbstractPipeline.wrapAndCopyInto(AbstractPipeline.java:471)
at java.util.stream.ReduceOps$ReduceOp.evaluateSequential(ReduceOps.java:708)
at java.util.stream.AbstractPipeline.evaluate(AbstractPipeline.java:234)
at java.util.stream.ReferencePipeline.collect(ReferencePipeline.java:499)
at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.buildPCollectionList(BeamSqlRelUtils.java:48)
at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:64)
at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:36)
at org.apache.beam.sdk.extensions.sql.SqlTransform.expand(SqlTransform.java:111)
at org.apache.beam.sdk.extensions.sql.SqlTransform.expand(SqlTransform.java:79)
at org.apache.beam.sdk.Pipeline.applyInternal(Pipeline.java:537)
at org.apache.beam.sdk.Pipeline.applyTransform(Pipeline.java:488)
at org.apache.beam.sdk.values.PCollection.apply(PCollection.java:370)
at com.sanmina.BeamSQLUnnest.main(BeamSQLUnnest.java:217)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at org.codehaus.mojo.exec.ExecJavaMojo$1.run(ExecJavaMojo.java:282)
at java.lang.Thread.run(Thread.java:748)
You can achieve this using BigQueryIO.
String query = "SELECT `X`.`email` "
    + "FROM `beam`.`PCOLLECTION` AS `PCOLLECTION`, "
    + "UNNEST(`PCOLLECTION`.`details`) AS `X`";
// assuming an existing Pipeline named "pipeline"
PCollection<TableRow> rows =
    pipeline.apply(BigQueryIO.readTableRows().fromQuery(query).usingStandardSql());

Avro "not open" exception when writing generic records using Apache Beam

I am using AvroIO.<MyCustomType>writeCustomTypeToGenericRecords() to write generic records to GCS inside a streaming Dataflow job. For the first few minutes everything seems to work fine; however, after around 10 minutes the job starts throwing the following error:
java.lang.RuntimeException: org.apache.beam.sdk.util.UserCodeException: org.apache.avro.AvroRuntimeException: not open
com.google.cloud.dataflow.worker.GroupAlsoByWindowsParDoFn$1.output(GroupAlsoByWindowsParDoFn.java:183)
com.google.cloud.dataflow.worker.GroupAlsoByWindowFnRunner$1.outputWindowedValue(GroupAlsoByWindowFnRunner.java:102)
org.apache.beam.runners.core.ReduceFnRunner.lambda$onTrigger$1(ReduceFnRunner.java:1057)
org.apache.beam.runners.core.ReduceFnContextFactory$OnTriggerContextImpl.output(ReduceFnContextFactory.java:438)
org.apache.beam.runners.core.SystemReduceFn.onTrigger(SystemReduceFn.java:125)
org.apache.beam.runners.core.ReduceFnRunner.onTrigger(ReduceFnRunner.java:1060)
org.apache.beam.runners.core.ReduceFnRunner.onTimers(ReduceFnRunner.java:768)
com.google.cloud.dataflow.worker.StreamingGroupAlsoByWindowViaWindowSetFn.processElement(StreamingGroupAlsoByWindowViaWindowSetFn.java:95)
com.google.cloud.dataflow.worker.StreamingGroupAlsoByWindowViaWindowSetFn.processElement(StreamingGroupAlsoByWindowViaWindowSetFn.java:42)
com.google.cloud.dataflow.worker.GroupAlsoByWindowFnRunner.invokeProcessElement(GroupAlsoByWindowFnRunner.java:115)
com.google.cloud.dataflow.worker.GroupAlsoByWindowFnRunner.processElement(GroupAlsoByWindowFnRunner.java:73)
org.apache.beam.runners.core.LateDataDroppingDoFnRunner.processElement(LateDataDroppingDoFnRunner.java:80)
com.google.cloud.dataflow.worker.GroupAlsoByWindowsParDoFn.processElement(GroupAlsoByWindowsParDoFn.java:133)
com.google.cloud.dataflow.worker.util.common.worker.ParDoOperation.process(ParDoOperation.java:43)
com.google.cloud.dataflow.worker.util.common.worker.OutputReceiver.process(OutputReceiver.java:48)
com.google.cloud.dataflow.worker.util.common.worker.ReadOperation.runReadLoop(ReadOperation.java:200)
com.google.cloud.dataflow.worker.util.common.worker.ReadOperation.start(ReadOperation.java:158)
com.google.cloud.dataflow.worker.util.common.worker.MapTaskExecutor.execute(MapTaskExecutor.java:75)
com.google.cloud.dataflow.worker.StreamingDataflowWorker.process(StreamingDataflowWorker.java:1227)
com.google.cloud.dataflow.worker.StreamingDataflowWorker.access$1000(StreamingDataflowWorker.java:136)
com.google.cloud.dataflow.worker.StreamingDataflowWorker$6.run(StreamingDataflowWorker.java:966)
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
java.lang.Thread.run(Thread.java:745)
Caused by: org.apache.beam.sdk.util.UserCodeException: org.apache.avro.AvroRuntimeException: not open
org.apache.beam.sdk.util.UserCodeException.wrap(UserCodeException.java:34)
org.apache.beam.sdk.io.WriteFiles$WriteShardsIntoTempFilesFn$DoFnInvoker.invokeProcessElement(Unknown Source)
org.apache.beam.runners.core.SimpleDoFnRunner.invokeProcessElement(SimpleDoFnRunner.java:275)
org.apache.beam.runners.core.SimpleDoFnRunner.processElement(SimpleDoFnRunner.java:237)
com.google.cloud.dataflow.worker.StreamingSideInputDoFnRunner.processElement(StreamingSideInputDoFnRunner.java:72)
com.google.cloud.dataflow.worker.SimpleParDoFn.processElement(SimpleParDoFn.java:324)
com.google.cloud.dataflow.worker.util.common.worker.ParDoOperation.process(ParDoOperation.java:43)
com.google.cloud.dataflow.worker.util.common.worker.OutputReceiver.process(OutputReceiver.java:48)
com.google.cloud.dataflow.worker.GroupAlsoByWindowsParDoFn$1.output(GroupAlsoByWindowsParDoFn.java:181)
com.google.cloud.dataflow.worker.GroupAlsoByWindowFnRunner$1.outputWindowedValue(GroupAlsoByWindowFnRunner.java:102)
org.apache.beam.runners.core.ReduceFnRunner.lambda$onTrigger$1(ReduceFnRunner.java:1057)
org.apache.beam.runners.core.ReduceFnContextFactory$OnTriggerContextImpl.output(ReduceFnContextFactory.java:438)
org.apache.beam.runners.core.SystemReduceFn.onTrigger(SystemReduceFn.java:125)
org.apache.beam.runners.core.ReduceFnRunner.onTrigger(ReduceFnRunner.java:1060)
org.apache.beam.runners.core.ReduceFnRunner.onTimers(ReduceFnRunner.java:768)
com.google.cloud.dataflow.worker.StreamingGroupAlsoByWindowViaWindowSetFn.processElement(StreamingGroupAlsoByWindowViaWindowSetFn.java:95)
com.google.cloud.dataflow.worker.StreamingGroupAlsoByWindowViaWindowSetFn.processElement(StreamingGroupAlsoByWindowViaWindowSetFn.java:42)
com.google.cloud.dataflow.worker.GroupAlsoByWindowFnRunner.invokeProcessElement(GroupAlsoByWindowFnRunner.java:115)
com.google.cloud.dataflow.worker.GroupAlsoByWindowFnRunner.processElement(GroupAlsoByWindowFnRunner.java:73)
org.apache.beam.runners.core.LateDataDroppingDoFnRunner.processElement(LateDataDroppingDoFnRunner.java:80)
com.google.cloud.dataflow.worker.GroupAlsoByWindowsParDoFn.processElement(GroupAlsoByWindowsParDoFn.java:133)
com.google.cloud.dataflow.worker.util.common.worker.ParDoOperation.process(ParDoOperation.java:43)
com.google.cloud.dataflow.worker.util.common.worker.OutputReceiver.process(OutputReceiver.java:48)
com.google.cloud.dataflow.worker.util.common.worker.ReadOperation.runReadLoop(ReadOperation.java:200)
com.google.cloud.dataflow.worker.util.common.worker.ReadOperation.start(ReadOperation.java:158)
com.google.cloud.dataflow.worker.util.common.worker.MapTaskExecutor.execute(MapTaskExecutor.java:75)
com.google.cloud.dataflow.worker.StreamingDataflowWorker.process(StreamingDataflowWorker.java:1227)
com.google.cloud.dataflow.worker.StreamingDataflowWorker.access$1000(StreamingDataflowWorker.java:136)
com.google.cloud.dataflow.worker.StreamingDataflowWorker$6.run(StreamingDataflowWorker.java:966)
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
java.lang.Thread.run(Thread.java:745)
Caused by: org.apache.avro.AvroRuntimeException: not open
org.apache.avro.file.DataFileWriter.assertOpen(DataFileWriter.java:82)
org.apache.avro.file.DataFileWriter.append(DataFileWriter.java:299)
org.apache.beam.sdk.io.AvroSink$AvroWriter.write(AvroSink.java:123)
org.apache.beam.sdk.io.WriteFiles.writeOrClose(WriteFiles.java:550)
org.apache.beam.sdk.io.WriteFiles.access$1000(WriteFiles.java:112)
org.apache.beam.sdk.io.WriteFiles$WriteShardsIntoTempFilesFn.processElement(WriteFiles.java:718)
The Dataflow job continues to run fine, though. Some background on the streaming job: it pulls messages from Pub/Sub, applies a fixed window of 5 minutes with a trigger of 10,000 messages (whichever comes first), processes the messages, and finally writes to a GCS bucket, where each specific type of message goes to its own folder based on the message type, using .to(new AvroEventDynamicDestinations(avroBaseDir, schemaView)).
UPDATE 1: Looking at the timestamps of this error, it seems to occur at exact 10-second intervals, so 6 per minute.
I had exactly the same exception. My problem came from a wrong schema, a null schema to be exact (it was not found by the schema registry).
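For anyone hitting the same thing, a minimal guard along these lines makes the failure visible at pipeline-construction time instead of deep inside the sink (this is only a sketch; the helper and subject name are hypothetical placeholders, not Beam API):
import org.apache.avro.Schema;

// Hypothetical guard: resolve the Avro schema before building the AvroIO write and
// fail fast if nothing was found, rather than passing a null schema on to the writer.
static Schema requireSchema(Schema schema, String subject) {
  if (schema == null) {
    throw new IllegalStateException("No Avro schema resolved for subject: " + subject);
  }
  return schema;
}
The schema returned here is then what gets handed to the write (e.g. via withSchema(...) or through the DynamicAvroDestinations used in .to(...)).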

Sonarqube 6.7 - Fail to read ISSUES.LOCATIONS, com.google.protobuf.InvalidProtocolBufferException

I'm having the issue below after upgrading to SonarQube 6.7. I'm using the SonarQube Docker image, and I ran the migration scripts as suggested by the SonarQube UI.
Can someone suggest a fix?
Thank you
2018.05.15 06:58:42 INFO ce[AWNimASHSOFQrk0D-LC8][o.s.c.t.CeWorkerImpl] Execute task | project=myproject:develop | type=REPORT | id=AWNimASHSOFQrk0D-LC8 | submitter=jenkins
2018.05.15 06:59:52 ERROR ce[AWNimASHSOFQrk0D-LC8][o.s.c.t.CeWorkerImpl] Failed to execute task AWNimASHSOFQrk0D-LC8
org.sonar.server.computation.task.projectanalysis.component.VisitException: Visit of Component {key=myproject:develop,type=PROJECT} failed
at org.sonar.server.computation.task.projectanalysis.component.VisitException.rethrowOrWrap(VisitException.java:44)
at org.sonar.server.computation.task.projectanalysis.component.VisitorsCrawler.visit(VisitorsCrawler.java:74)
at org.sonar.server.computation.task.projectanalysis.step.ExecuteVisitorsStep.execute(ExecuteVisitorsStep.java:51)
at org.sonar.server.computation.task.step.ComputationStepExecutor.executeSteps(ComputationStepExecutor.java:64)
at org.sonar.server.computation.task.step.ComputationStepExecutor.execute(ComputationStepExecutor.java:52)
at org.sonar.server.computation.task.projectanalysis.taskprocessor.ReportTaskProcessor.process(ReportTaskProcessor.java:73)
at org.sonar.ce.taskprocessor.CeWorkerImpl.executeTask(CeWorkerImpl.java:134)
at org.sonar.ce.taskprocessor.CeWorkerImpl.findAndProcessTask(CeWorkerImpl.java:97)
at org.sonar.ce.taskprocessor.CeWorkerImpl.withCustomizedThreadName(CeWorkerImpl.java:81)
at org.sonar.ce.taskprocessor.CeWorkerImpl.call(CeWorkerImpl.java:73)
at org.sonar.ce.taskprocessor.CeWorkerImpl.call(CeWorkerImpl.java:43)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$201(ScheduledThreadPoolExecutor.java:180)
at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:293)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:748)
Caused by: org.apache.ibatis.exceptions.PersistenceException:
Error querying database. Cause: java.lang.IllegalStateException: Fail to read ISSUES.LOCATIONS [KEE=AV9aIkjMZ-jmCeoEg-IR]
The error may exist in org.sonar.db.issue.IssueMapper
The error may involve org.sonar.db.issue.IssueMapper.scrollNonClosedByComponentUuid
The error occurred while handling results
SQL: select i.id, i.kee as kee, i.rule_id as ruleId, i.severity as severity, i.manual_severity as manualSeverity, i.message as message, i.line as line, i.locations as locations, i.gap as gap, i.effort as effort, i.status as status, i.resolution as resolution, i.checksum as checksum, i.assignee as assignee, i.author_login as authorLogin, i.tags as tagsString, i.issue_attributes as issueAttributes, i.issue_creation_date as issueCreationTime, i.issue_update_date as issueUpdateTime, i.issue_close_date as issueCloseTime, i.created_at as createdAt, i.updated_at as updatedAt, r.plugin_rule_key as ruleKey, r.plugin_name as ruleRepo, r.language as language, p.kee as componentKey, i.component_uuid as componentUuid, p.module_uuid as moduleUuid, p.module_uuid_path as moduleUuidPath, p.path as filePath, root.kee as projectKey, i.project_uuid as projectUuid, i.issue_type as type from issues i inner join rules r on r.id=i.rule_id inner join projects p on p.uuid=i.component_uuid inner join projects root on root.uuid=i.project_uuid where i.component_uuid = ? and i.status <> 'CLOSED'
Cause: java.lang.IllegalStateException: Fail to read ISSUES.LOCATIONS [KEE=AV9aIkjMZ-jmCeoEg-IR]
at org.apache.ibatis.exceptions.ExceptionFactory.wrapException(ExceptionFactory.java:30)
at org.apache.ibatis.session.defaults.DefaultSqlSession.select(DefaultSqlSession.java:172)
at org.apache.ibatis.session.defaults.DefaultSqlSession.select(DefaultSqlSession.java:158)
at org.apache.ibatis.binding.MapperMethod.executeWithResultHandler(MapperMethod.java:126)
at org.apache.ibatis.binding.MapperMethod.execute(MapperMethod.java:72)
at org.apache.ibatis.binding.MapperProxy.invoke(MapperProxy.java:59)
at com.sun.proxy.$Proxy42.scrollNonClosedByComponentUuid(Unknown Source)
at org.sonar.server.computation.task.projectanalysis.issue.ComponentIssuesLoader.loadForComponentUuid(ComponentIssuesLoader.java:73)
at org.sonar.server.computation.task.projectanalysis.issue.ComponentIssuesLoader.loadForComponentUuid(ComponentIssuesLoader.java:51)
at org.sonar.server.computation.task.projectanalysis.issue.CloseIssuesOnRemovedComponentsVisitor.closeIssuesForDeletedComponentUuids(CloseIssuesOnRemovedComponentsVisitor.java:60)
at org.sonar.server.computation.task.projectanalysis.issue.CloseIssuesOnRemovedComponentsVisitor.visitProject(CloseIssuesOnRemovedComponentsVisitor.java:53)
at org.sonar.server.computation.task.projectanalysis.component.TypeAwareVisitorWrapper.visitProject(TypeAwareVisitorWrapper.java:47)
at org.sonar.server.computation.task.projectanalysis.component.VisitorsCrawler.visitNode(VisitorsCrawler.java:120)
at org.sonar.server.computation.task.projectanalysis.component.VisitorsCrawler.visitImpl(VisitorsCrawler.java:100)
at org.sonar.server.computation.task.projectanalysis.component.VisitorsCrawler.visit(VisitorsCrawler.java:72)
... 17 common frames omitted
Caused by: java.lang.IllegalStateException: Fail to read ISSUES.LOCATIONS [KEE=AV9aIkjMZ-jmCeoEg-IR]
at org.sonar.db.issue.IssueDto.parseLocations(IssueDto.java:652)
at org.sonar.db.issue.IssueDto.toDefaultIssue(IssueDto.java:721)
at org.sonar.server.computation.task.projectanalysis.issue.ComponentIssuesLoader.lambda$loadForComponentUuid$1(ComponentIssuesLoader.java:74)
at org.apache.ibatis.executor.resultset.DefaultResultSetHandler.callResultHandler(DefaultResultSetHandler.java:363)
at org.apache.ibatis.executor.resultset.DefaultResultSetHandler.storeObject(DefaultResultSetHandler.java:356)
at org.apache.ibatis.executor.resultset.DefaultResultSetHandler.handleRowValuesForSimpleResultMap(DefaultResultSetHandler.java:348)
at org.apache.ibatis.executor.resultset.DefaultResultSetHandler.handleRowValues(DefaultResultSetHandler.java:322)
at org.apache.ibatis.executor.resultset.DefaultResultSetHandler.handleResultSet(DefaultResultSetHandler.java:298)
at org.apache.ibatis.executor.resultset.DefaultResultSetHandler.handleResultSets(DefaultResultSetHandler.java:192)
at org.apache.ibatis.executor.statement.PreparedStatementHandler.query(PreparedStatementHandler.java:64)
at org.apache.ibatis.executor.statement.RoutingStatementHandler.query(RoutingStatementHandler.java:79)
at org.apache.ibatis.executor.ReuseExecutor.doQuery(ReuseExecutor.java:60)
at org.apache.ibatis.executor.BaseExecutor.queryFromDatabase(BaseExecutor.java:324)
at org.apache.ibatis.executor.BaseExecutor.query(BaseExecutor.java:156)
at org.apache.ibatis.executor.CachingExecutor.query(CachingExecutor.java:109)
at org.apache.ibatis.executor.CachingExecutor.query(CachingExecutor.java:83)
at org.apache.ibatis.session.defaults.DefaultSqlSession.select(DefaultSqlSession.java:170)
... 30 common frames omitted
Caused by: com.google.protobuf.InvalidProtocolBufferException: While parsing a protocol message, the input ended unexpectedly in the middle of a field. This could mean either that the input has been truncated or that an embedded message misreported its own length.
at com.google.protobuf.InvalidProtocolBufferException.truncatedMessage(InvalidProtocolBufferException.java:70)
at com.google.protobuf.CodedInputStream.refillBuffer(CodedInputStream.java:1068)
at com.google.protobuf.CodedInputStream.readRawByte(CodedInputStream.java:1135)
at com.google.protobuf.CodedInputStream.readRawVarint64SlowPath(CodedInputStream.java:778)
at com.google.protobuf.CodedInputStream.readRawVarint32(CodedInputStream.java:637)
at com.google.protobuf.CodedInputStream.readInt32(CodedInputStream.java:348)
at org.sonar.db.protobuf.DbCommons$TextRange.<init>(DbCommons.java:149)
at org.sonar.db.protobuf.DbCommons$TextRange.<init>(DbCommons.java:90)
at org.sonar.db.protobuf.DbCommons$TextRange$1.parsePartialFrom(DbCommons.java:750)
at org.sonar.db.protobuf.DbCommons$TextRange$1.parsePartialFrom(DbCommons.java:744)
at com.google.protobuf.CodedInputStream.readMessage(CodedInputStream.java:495)
at org.sonar.db.protobuf.DbIssues$Locations.<init>(DbIssues.java:99)
at org.sonar.db.protobuf.DbIssues$Locations.<init>(DbIssues.java:55)
at org.sonar.db.protobuf.DbIssues$Locations$1.parsePartialFrom(DbIssues.java:852)
at org.sonar.db.protobuf.DbIssues$Locations$1.parsePartialFrom(DbIssues.java:846)
at com.google.protobuf.AbstractParser.parsePartialFrom(AbstractParser.java:137)
at com.google.protobuf.AbstractParser.parseFrom(AbstractParser.java:169)
at com.google.protobuf.AbstractParser.parseFrom(AbstractParser.java:180)
at com.google.protobuf.AbstractParser.parseFrom(AbstractParser.java:185)
at com.google.protobuf.AbstractParser.parseFrom(AbstractParser.java:49)
at org.sonar.db.protobuf.DbIssues$Locations.parseFrom(DbIssues.java:253)
at org.sonar.db.issue.IssueDto.parseLocations(IssueDto.java:650)
... 46 common frames omitted
2018.05.15 06:59:52 ERROR ce[AWNimASHSOFQrk0D-LC8][o.s.c.t.CeWorkerImpl] Executed task | project=myproject:develop | type=REPORT | id=AWNimASHSOFQrk0D-LC8 | submitter=jenkins | time=70255ms
Actually, I solved my problem by executing this query on the sonar schema. I guess it was definitely related to the version upgrade... not a clean solution, but still OK for me as I didn't need to keep historical data:
delete from issues where STATUS != 'CLOSED'

GraphLoader is not working

I am using DSE 5.0.5 and graphloader to load data from GraphML. When running this command:
graphloader ./scripts/graphml2Vertex/recipeMappingGraphML.groovy -graph testGraphML -address 172.31.35.238 -load_failure_log /home/centos/DSE/dse-graph-loader-5.0.5/scripts/graphml2Vertex/loadfailure.log -dryrun true
I get the following error:
groovy.lang.MissingMethodException: No signature of method: com.datastax.dsegraphloader.api.GraphSource$VerticesBuilder.withVertexId() is applicable for argument types: () values: []
at org.codehaus.groovy.runtime.ScriptBytecodeAdapter.unwrap(ScriptBytecodeAdapter.java:58)
at org.codehaus.groovy.runtime.callsite.PojoMetaClassSite.call(PojoMetaClassSite.java:49)
at org.codehaus.groovy.runtime.callsite.CallSiteArray.defaultCall(CallSiteArray.java:48)
at org.codehaus.groovy.runtime.callsite.AbstractCallSite.call(AbstractCallSite.java:113)
at org.codehaus.groovy.runtime.callsite.AbstractCallSite.call(AbstractCallSite.java:117)
at Script1.run(Script1.groovy:19)
at com.datastax.dsegraphloader.cli.GroovyScriptExecutor.evaluate(GroovyScriptExecutor.java:106)
at com.datastax.dsegraphloader.cli.Executable.execute(Executable.java:72)
at com.datastax.dsegraphloader.cli.Executable.main(Executable.java:171)
I exported the data as GraphML from OrientDB (using the g.saveGraphML(filename.xml) function) and am now trying to load the same data into DSE Graph using graphloader (GraphML import). Can you please tell me the cause of this kind of error?
-Varun
I just discovered yesterday that withVertexId() needs to be removed in order for graphloader to run the mapping script. I'll be updating the documentation.

Unable to import neo4j database with blueprints

I'm trying to open a Neo4j database using the Blueprints implementation, but I get the following exceptions:
Neo4jGraph graph = new Neo4jGraph("/Users/pipe/Dev/neo4j-community-2.1.0-M01/data/graph.db");
This causes:
Caused by: javax.faces.el.EvaluationException: java.lang.RuntimeException: Bad value '-192M' for setting 'neostore.propertystore.db.strings.mapped_memory': value does not match expression:\d+[kmgKMG]?
at javax.faces.component.MethodBindingMethodExpressionAdapter.invoke(MethodBindingMethodExpressionAdapter.java:102)
at com.sun.faces.application.ActionListenerImpl.processAction(ActionListenerImpl.java:101)
... 32 more
Caused by: java.lang.RuntimeException: Bad value '-192M' for setting 'neostore.propertystore.db.strings.mapped_memory': value does not match expression:\d+[kmgKMG]?
at com.tinkerpop.blueprints.impls.neo4j.Neo4jGraph.<init>(Neo4jGraph.java:165)
at com.tinkerpop.blueprints.impls.neo4j.Neo4jGraph.<init>(Neo4jGraph.java:135)
at org.pipe.java.web.netnografica.persistenza.graphdb.DAONodo.toGraphml(DAONodo.java:204)
at org.pipe.java.web.netnografica.controllo.ControlloGenerale.esportaGraphml(ControlloGenerale.java:133)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:606)
at org.apache.el.parser.AstValue.invoke(AstValue.java:278)
at org.apache.el.MethodExpressionImpl.invoke(MethodExpressionImpl.java:274)
at com.sun.faces.facelets.el.TagMethodExpression.invoke(TagMethodExpression.java:105)
at javax.faces.component.MethodBindingMethodExpressionAdapter.invoke(MethodBindingMethodExpressionAdapter.java:88)
... 33 more
Caused by: java.lang.IllegalArgumentException: Bad value '-192M' for setting 'neostore.propertystore.db.strings.mapped_memory': value does not match expression:\d+[kmgKMG]?
at org.neo4j.helpers.Settings$DefaultSetting.apply(Settings.java:782)
at org.neo4j.helpers.Settings$DefaultSetting.apply(Settings.java:702)
at org.neo4j.graphdb.factory.GraphDatabaseSetting$SettingWrapper.apply(GraphDatabaseSetting.java:215)
at org.neo4j.graphdb.factory.GraphDatabaseSetting$SettingWrapper.apply(GraphDatabaseSetting.java:189)
at org.neo4j.kernel.configuration.ConfigurationValidator.validate(ConfigurationValidator.java:50)
at org.neo4j.kernel.configuration.Config.applyChanges(Config.java:121)
at org.neo4j.kernel.InternalAbstractGraphDatabase.create(InternalAbstractGraphDatabase.java:339)
at org.neo4j.kernel.InternalAbstractGraphDatabase.run(InternalAbstractGraphDatabase.java:253)
at org.neo4j.kernel.EmbeddedGraphDatabase.<init>(EmbeddedGraphDatabase.java:106)
at org.neo4j.kernel.EmbeddedGraphDatabase.<init>(EmbeddedGraphDatabase.java:81)
at org.neo4j.kernel.EmbeddedGraphDatabase.<init>(EmbeddedGraphDatabase.java:63)
at com.tinkerpop.blueprints.impls.neo4j.Neo4jGraph.<init>(Neo4jGraph.java:155)
... 44 more
It seems I need to provide a properties file. Is that correct?
*Edited to answer Michael Hunger:
Well... I changed the Blueprints version to 2.5.0-SNAPSHOT, but nothing changed. So I provided the config using the map accepted by the constructor:
Map<String, String> configurazione = new HashMap<String, String>();
configurazione.put("neostore.propertystore.db.strings.mapped_memory", "250M");
configurazione.put("neostore.propertystore.db.arrays.mapped_memory", "100M");
configurazione.put("neostore.relationshipstore.db.mapped_memory", "3845M");
configurazione.put("neostore.nodestore.db.mapped_memory", "350M");
configurazione.put("neostore.propertystore.db.mapped_memory", "350M");
configurazione.put("neostore.nodestore.db.mapped_memory", "769M");
Neo4j2Graph grafo = new Neo4j2Graph("/Users/pipe/Dev/neo4j-community-2.1.0-M01/data/graph.db", configurazione);
Now the exception has changed, and I really don't know what is wrong. I linked the complete stack trace on Pastebin:
http://pastebin.com/XpipSysp
At the end a NoSuchMethodError is thrown. What am I missing?
Thanks a lot.
Which blueprints version are you using?
Blueprints 2.5-SNAPSHOT is compatible with Neo4j 2.0.0.
Please note there is a separate module for Neo4j 2.0 called blueprints-neo4j2, and the classes are called Neo4j2Graph, Neo4j2Vertex, etc.
You should also be able to provide config to the Neo4j2Graph.
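A minimal sketch of that setup (the package name is as I recall it from the blueprints-neo4j2 module, so double-check it against the jar you end up with; the store path and memory values are simply the ones from your own snippet):
import com.tinkerpop.blueprints.impls.neo4j2.Neo4j2Graph;

import java.util.HashMap;
import java.util.Map;

// Build the config map and hand it to the Neo4j2 variant of the graph wrapper.
Map<String, String> config = new HashMap<String, String>();
config.put("neostore.propertystore.db.strings.mapped_memory", "250M");
config.put("neostore.nodestore.db.mapped_memory", "350M");

Neo4j2Graph graph = new Neo4j2Graph("/Users/pipe/Dev/neo4j-community-2.1.0-M01/data/graph.db", config);
try {
    // ... work with the graph ...
} finally {
    // Blueprints graphs must be shut down explicitly to release the embedded store.
    graph.shutdown();
}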