Avro "not open" exception when writing generic records using Apache Beam - avro

I am using AvroIO.<MyCustomType>writeCustomTypeToGenericRecords() to write generic records to GCS inside a streaming Dataflow job. For the first few minutes everything seems to work fine; however, after around 10 minutes the job starts throwing the following error:
java.lang.RuntimeException: org.apache.beam.sdk.util.UserCodeException: org.apache.avro.AvroRuntimeException: not open
com.google.cloud.dataflow.worker.GroupAlsoByWindowsParDoFn$1.output(GroupAlsoByWindowsParDoFn.java:183)
com.google.cloud.dataflow.worker.GroupAlsoByWindowFnRunner$1.outputWindowedValue(GroupAlsoByWindowFnRunner.java:102)
org.apache.beam.runners.core.ReduceFnRunner.lambda$onTrigger$1(ReduceFnRunner.java:1057)
org.apache.beam.runners.core.ReduceFnContextFactory$OnTriggerContextImpl.output(ReduceFnContextFactory.java:438)
org.apache.beam.runners.core.SystemReduceFn.onTrigger(SystemReduceFn.java:125)
org.apache.beam.runners.core.ReduceFnRunner.onTrigger(ReduceFnRunner.java:1060)
org.apache.beam.runners.core.ReduceFnRunner.onTimers(ReduceFnRunner.java:768)
com.google.cloud.dataflow.worker.StreamingGroupAlsoByWindowViaWindowSetFn.processElement(StreamingGroupAlsoByWindowViaWindowSetFn.java:95)
com.google.cloud.dataflow.worker.StreamingGroupAlsoByWindowViaWindowSetFn.processElement(StreamingGroupAlsoByWindowViaWindowSetFn.java:42)
com.google.cloud.dataflow.worker.GroupAlsoByWindowFnRunner.invokeProcessElement(GroupAlsoByWindowFnRunner.java:115)
com.google.cloud.dataflow.worker.GroupAlsoByWindowFnRunner.processElement(GroupAlsoByWindowFnRunner.java:73)
org.apache.beam.runners.core.LateDataDroppingDoFnRunner.processElement(LateDataDroppingDoFnRunner.java:80)
com.google.cloud.dataflow.worker.GroupAlsoByWindowsParDoFn.processElement(GroupAlsoByWindowsParDoFn.java:133)
com.google.cloud.dataflow.worker.util.common.worker.ParDoOperation.process(ParDoOperation.java:43)
com.google.cloud.dataflow.worker.util.common.worker.OutputReceiver.process(OutputReceiver.java:48)
com.google.cloud.dataflow.worker.util.common.worker.ReadOperation.runReadLoop(ReadOperation.java:200)
com.google.cloud.dataflow.worker.util.common.worker.ReadOperation.start(ReadOperation.java:158)
com.google.cloud.dataflow.worker.util.common.worker.MapTaskExecutor.execute(MapTaskExecutor.java:75)
com.google.cloud.dataflow.worker.StreamingDataflowWorker.process(StreamingDataflowWorker.java:1227)
com.google.cloud.dataflow.worker.StreamingDataflowWorker.access$1000(StreamingDataflowWorker.java:136)
com.google.cloud.dataflow.worker.StreamingDataflowWorker$6.run(StreamingDataflowWorker.java:966)
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
java.lang.Thread.run(Thread.java:745)
Caused by: org.apache.beam.sdk.util.UserCodeException: org.apache.avro.AvroRuntimeException: not open
org.apache.beam.sdk.util.UserCodeException.wrap(UserCodeException.java:34)
org.apache.beam.sdk.io.WriteFiles$WriteShardsIntoTempFilesFn$DoFnInvoker.invokeProcessElement(Unknown Source)
org.apache.beam.runners.core.SimpleDoFnRunner.invokeProcessElement(SimpleDoFnRunner.java:275)
org.apache.beam.runners.core.SimpleDoFnRunner.processElement(SimpleDoFnRunner.java:237)
com.google.cloud.dataflow.worker.StreamingSideInputDoFnRunner.processElement(StreamingSideInputDoFnRunner.java:72)
com.google.cloud.dataflow.worker.SimpleParDoFn.processElement(SimpleParDoFn.java:324)
com.google.cloud.dataflow.worker.util.common.worker.ParDoOperation.process(ParDoOperation.java:43)
com.google.cloud.dataflow.worker.util.common.worker.OutputReceiver.process(OutputReceiver.java:48)
com.google.cloud.dataflow.worker.GroupAlsoByWindowsParDoFn$1.output(GroupAlsoByWindowsParDoFn.java:181)
com.google.cloud.dataflow.worker.GroupAlsoByWindowFnRunner$1.outputWindowedValue(GroupAlsoByWindowFnRunner.java:102)
org.apache.beam.runners.core.ReduceFnRunner.lambda$onTrigger$1(ReduceFnRunner.java:1057)
org.apache.beam.runners.core.ReduceFnContextFactory$OnTriggerContextImpl.output(ReduceFnContextFactory.java:438)
org.apache.beam.runners.core.SystemReduceFn.onTrigger(SystemReduceFn.java:125)
org.apache.beam.runners.core.ReduceFnRunner.onTrigger(ReduceFnRunner.java:1060)
org.apache.beam.runners.core.ReduceFnRunner.onTimers(ReduceFnRunner.java:768)
com.google.cloud.dataflow.worker.StreamingGroupAlsoByWindowViaWindowSetFn.processElement(StreamingGroupAlsoByWindowViaWindowSetFn.java:95)
com.google.cloud.dataflow.worker.StreamingGroupAlsoByWindowViaWindowSetFn.processElement(StreamingGroupAlsoByWindowViaWindowSetFn.java:42)
com.google.cloud.dataflow.worker.GroupAlsoByWindowFnRunner.invokeProcessElement(GroupAlsoByWindowFnRunner.java:115)
com.google.cloud.dataflow.worker.GroupAlsoByWindowFnRunner.processElement(GroupAlsoByWindowFnRunner.java:73)
org.apache.beam.runners.core.LateDataDroppingDoFnRunner.processElement(LateDataDroppingDoFnRunner.java:80)
com.google.cloud.dataflow.worker.GroupAlsoByWindowsParDoFn.processElement(GroupAlsoByWindowsParDoFn.java:133)
com.google.cloud.dataflow.worker.util.common.worker.ParDoOperation.process(ParDoOperation.java:43)
com.google.cloud.dataflow.worker.util.common.worker.OutputReceiver.process(OutputReceiver.java:48)
com.google.cloud.dataflow.worker.util.common.worker.ReadOperation.runReadLoop(ReadOperation.java:200)
com.google.cloud.dataflow.worker.util.common.worker.ReadOperation.start(ReadOperation.java:158)
com.google.cloud.dataflow.worker.util.common.worker.MapTaskExecutor.execute(MapTaskExecutor.java:75)
com.google.cloud.dataflow.worker.StreamingDataflowWorker.process(StreamingDataflowWorker.java:1227)
com.google.cloud.dataflow.worker.StreamingDataflowWorker.access$1000(StreamingDataflowWorker.java:136)
com.google.cloud.dataflow.worker.StreamingDataflowWorker$6.run(StreamingDataflowWorker.java:966)
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
java.lang.Thread.run(Thread.java:745)
Caused by: org.apache.avro.AvroRuntimeException: not open
org.apache.avro.file.DataFileWriter.assertOpen(DataFileWriter.java:82)
org.apache.avro.file.DataFileWriter.append(DataFileWriter.java:299)
org.apache.beam.sdk.io.AvroSink$AvroWriter.write(AvroSink.java:123)
org.apache.beam.sdk.io.WriteFiles.writeOrClose(WriteFiles.java:550)
org.apache.beam.sdk.io.WriteFiles.access$1000(WriteFiles.java:112)
org.apache.beam.sdk.io.WriteFiles$WriteShardsIntoTempFilesFn.processElement(WriteFiles.java:718)
The Dataflow job continues to run fine, though. Some background on the streaming job: it pulls messages from Pub/Sub, applies a fixed window of 5 minutes with a trigger of 10,000 messages (whichever comes first), processes the messages, and finally writes to a GCS bucket, where each message type goes to its own folder via .to(new AvroEventDynamicDestinations(avroBaseDir, schemaView)).
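For reference, here is a minimal sketch of the windowing and write steps described above. It is not the actual job code: it assumes a PCollection<MyCustomType> named events already decoded from Pub/Sub plus the schemaView side input, and the trigger shape, shard count and temp directory are illustrative.
import org.apache.beam.sdk.io.AvroIO;
import org.apache.beam.sdk.io.FileSystems;
import org.apache.beam.sdk.transforms.windowing.AfterPane;
import org.apache.beam.sdk.transforms.windowing.AfterWatermark;
import org.apache.beam.sdk.transforms.windowing.FixedWindows;
import org.apache.beam.sdk.transforms.windowing.Window;
import org.joda.time.Duration;

events
    // 5-minute fixed windows that also fire early once 10,000 elements have arrived
    .apply(Window.<MyCustomType>into(FixedWindows.of(Duration.standardMinutes(5)))
        .triggering(AfterWatermark.pastEndOfWindow()
            .withEarlyFirings(AfterPane.elementCountAtLeast(10_000)))
        .withAllowedLateness(Duration.ZERO)
        .discardingFiredPanes())
    // each message type goes to its own folder via the custom dynamic destinations
    .apply("WriteAvroToGcs",
        AvroIO.<MyCustomType>writeCustomTypeToGenericRecords()
            .to(new AvroEventDynamicDestinations(avroBaseDir, schemaView))
            .withTempDirectory(FileSystems.matchNewResource(avroBaseDir + "/tmp", true))
            .withWindowedWrites()
            .withNumShards(10));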
UPDATE 1: Looking at the timestamps of this error, it seems to occur at an interval of exactly 10 seconds, i.e. 6 times per minute.

I had exactly the same exception. In my case the problem came from a wrong schema, a null schema to be exact (it was not found by the schema registry).
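If you hit the same thing, a cheap sanity check is to fail fast when the schema lookup comes back empty, before the writer is ever constructed. This is only a hypothetical sketch; lookupSchemaFor and eventType are placeholder names, not part of the original job.
import org.apache.avro.Schema;

Schema schema = lookupSchemaFor(eventType); // placeholder for whatever registry lookup you use
if (schema == null) {
    // fail fast instead of letting the sink be built with a null schema
    throw new IllegalStateException("No Avro schema found for event type " + eventType);
}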

Related

How to deal with CoderException: cannot encode a null String with scio

I just started using scio and Dataflow. I tried my code on a single input file and it worked fine, but when I added more files to the input I got the following exception:
java.lang.RuntimeException: org.apache.beam.sdk.coders.CoderException: cannot encode a null String
at org.apache.beam.runners.dataflow.worker.SimpleParDoFn$1.output(SimpleParDoFn.java:280)
at org.apache.beam.runners.core.SimpleDoFnRunner.outputWindowedValue(SimpleDoFnRunner.java:309)
at org.apache.beam.runners.core.SimpleDoFnRunner.access$700(SimpleDoFnRunner.java:77)
at org.apache.beam.runners.core.SimpleDoFnRunner$DoFnProcessContext.output(SimpleDoFnRunner.java:621)
at org.apache.beam.runners.core.SimpleDoFnRunner$DoFnProcessContext.output(SimpleDoFnRunner.java:609)
at com.spotify.scio.util.Functions$$anon$3.processElement(Functions.scala:158)
Caused by: org.apache.beam.sdk.coders.CoderException: cannot encode a null String
at org.apache.beam.sdk.coders.StringUtf8Coder.getEncodedElementByteSize(StringUtf8Coder.java:136)
at org.apache.beam.sdk.coders.StringUtf8Coder.getEncodedElementByteSize(StringUtf8Coder.java:37)
at org.apache.beam.sdk.coders.Coder.registerByteSizeObserver(Coder.java:291)
at com.spotify.scio.coders.RecordCoder.registerByteSizeObserver(Coder.scala:279)
at org.apache.beam.sdk.util.WindowedValue$FullWindowedValueCoder.registerByteSizeObserver(WindowedValue.java:564)
at org.apache.beam.sdk.util.WindowedValue$FullWindowedValueCoder.registerByteSizeObserver(WindowedValue.java:480)
at org.apache.beam.runners.dataflow.worker.IntrinsicMapTaskExecutorFactory$ElementByteSizeObservableCoder.registerByteSizeObserver(IntrinsicMapTaskExecutorFactory.java:399)
at org.apache.beam.runners.dataflow.worker.util.common.worker.OutputObjectAndByteCounter.update(OutputObjectAndByteCounter.java:125)
at org.apache.beam.runners.dataflow.worker.DataflowOutputCounter.update(DataflowOutputCounter.java:64)
at org.apache.beam.runners.dataflow.worker.util.common.worker.OutputReceiver.process(OutputReceiver.java:43)
at org.apache.beam.runners.dataflow.worker.SimpleParDoFn$1.output(SimpleParDoFn.java:272)
at org.apache.beam.runners.core.SimpleDoFnRunner.outputWindowedValue(SimpleDoFnRunner.java:309)
at org.apache.beam.runners.core.SimpleDoFnRunner.access$700(SimpleDoFnRunner.java:77)
at org.apache.beam.runners.core.SimpleDoFnRunner$DoFnProcessContext.output(SimpleDoFnRunner.java:621)
at org.apache.beam.runners.core.SimpleDoFnRunner$DoFnProcessContext.output(SimpleDoFnRunner.java:609)
at com.spotify.scio.util.Functions$$anon$3.processElement(Functions.scala:158)
at com.spotify.scio.util.Functions$$anon$3$DoFnInvoker.invokeProcessElement(Unknown Source)
at org.apache.beam.runners.core.SimpleDoFnRunner.invokeProcessElement(SimpleDoFnRunner.java:275)
at org.apache.beam.runners.core.SimpleDoFnRunner.processElement(SimpleDoFnRunner.java:240)
at org.apache.beam.runners.dataflow.worker.SimpleParDoFn.processElement(SimpleParDoFn.java:325)
at org.apache.beam.runners.dataflow.worker.util.common.worker.ParDoOperation.process(ParDoOperation.java:44)
at org.apache.beam.runners.dataflow.worker.util.common.worker.OutputReceiver.process(OutputReceiver.java:49)
at org.apache.beam.runners.dataflow.worker.util.common.worker.ReadOperation.runReadLoop(ReadOperation.java:201)
at org.apache.beam.runners.dataflow.worker.util.common.worker.ReadOperation.start(ReadOperation.java:159)
at org.apache.beam.runners.dataflow.worker.util.common.worker.MapTaskExecutor.execute(MapTaskExecutor.java:76)
at org.apache.beam.runners.dataflow.worker.BatchDataflowWorker.executeWork(BatchDataflowWorker.java:394)
at org.apache.beam.runners.dataflow.worker.BatchDataflowWorker.doWork(BatchDataflowWorker.java:363)
at org.apache.beam.runners.dataflow.worker.BatchDataflowWorker.getAndPerformWork(BatchDataflowWorker.java:291)
at org.apache.beam.runners.dataflow.worker.DataflowBatchWorkerHarness$WorkerThread.doWork(DataflowBatchWorkerHarness.java:135)
at org.apache.beam.runners.dataflow.worker.DataflowBatchWorkerHarness$WorkerThread.call(DataflowBatchWorkerHarness.java:115)
at org.apache.beam.runners.dataflow.worker.DataflowBatchWorkerHarness$WorkerThread.call(DataflowBatchWorkerHarness.java:102)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)
I guess one of my input files may contain some malformed data, but how do I bypass the bad data? There is a similar question for Java Beam: com.google.cloud.dataflow.sdk.coders.CoderException: cannot encode a null String
So I tried this:
val scText = sc.textFile(input)
scText.setCoder(NullableCoder.of(StringUtf8Coder.of()))
It didn't help. Can someone help me with this? Thanks.
The scio team provided a solution to this problem: add --nullableCoders=true as a command-line argument.
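For example, append the flag to the job's other pipeline options when launching it (the main class, project and runner below are placeholders; only --nullableCoders=true is the relevant addition):
sbt "runMain com.example.MyScioJob --project=my-project --runner=DataflowRunner --nullableCoders=true"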

Sonarqube 6.7 - Fail to read ISSUES.LOCATIONS, com.google.protobuf.InvalidProtocolBufferException

I'm having the issue below after upgrading to SonarQube 6.7. I'm using the Docker image for SonarQube, and I ran the migration scripts as suggested by the SonarQube UI.
Can someone suggest a fix?
Thank you
2018.05.15 06:58:42 INFO ce[AWNimASHSOFQrk0D-LC8][o.s.c.t.CeWorkerImpl] Execute task | project=myproject:develop | type=REPORT | id=AWNimASHSOFQrk0D-LC8 | submitter=jenkins
2018.05.15 06:59:52 ERROR ce[AWNimASHSOFQrk0D-LC8][o.s.c.t.CeWorkerImpl] Failed to execute task AWNimASHSOFQrk0D-LC8
org.sonar.server.computation.task.projectanalysis.component.VisitException: Visit of Component {key=myproject:develop,type=PROJECT} failed
at org.sonar.server.computation.task.projectanalysis.component.VisitException.rethrowOrWrap(VisitException.java:44)
at org.sonar.server.computation.task.projectanalysis.component.VisitorsCrawler.visit(VisitorsCrawler.java:74)
at org.sonar.server.computation.task.projectanalysis.step.ExecuteVisitorsStep.execute(ExecuteVisitorsStep.java:51)
at org.sonar.server.computation.task.step.ComputationStepExecutor.executeSteps(ComputationStepExecutor.java:64)
at org.sonar.server.computation.task.step.ComputationStepExecutor.execute(ComputationStepExecutor.java:52)
at org.sonar.server.computation.task.projectanalysis.taskprocessor.ReportTaskProcessor.process(ReportTaskProcessor.java:73)
at org.sonar.ce.taskprocessor.CeWorkerImpl.executeTask(CeWorkerImpl.java:134)
at org.sonar.ce.taskprocessor.CeWorkerImpl.findAndProcessTask(CeWorkerImpl.java:97)
at org.sonar.ce.taskprocessor.CeWorkerImpl.withCustomizedThreadName(CeWorkerImpl.java:81)
at org.sonar.ce.taskprocessor.CeWorkerImpl.call(CeWorkerImpl.java:73)
at org.sonar.ce.taskprocessor.CeWorkerImpl.call(CeWorkerImpl.java:43)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$201(ScheduledThreadPoolExecutor.java:180)
at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:293)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:748)
Caused by: org.apache.ibatis.exceptions.PersistenceException:
Error querying database. Cause: java.lang.IllegalStateException: Fail to read ISSUES.LOCATIONS [KEE=AV9aIkjMZ-jmCeoEg-IR]
The error may exist in org.sonar.db.issue.IssueMapper
The error may involve org.sonar.db.issue.IssueMapper.scrollNonClosedByComponentUuid
The error occurred while handling results
SQL: select i.id, i.kee as kee, i.rule_id as ruleId, i.severity as severity, i.manual_severity as manualSeverity, i.message as message, i.line as line, i.locations as locations, i.gap as gap, i.effort as effort, i.status as status, i.resolution as resolution, i.checksum as checksum, i.assignee as assignee, i.author_login as authorLogin, i.tags as tagsString, i.issue_attributes as issueAttributes, i.issue_creation_date as issueCreationTime, i.issue_update_date as issueUpdateTime, i.issue_close_date as issueCloseTime, i.created_at as createdAt, i.updated_at as updatedAt, r.plugin_rule_key as ruleKey, r.plugin_name as ruleRepo, r.language as language, p.kee as componentKey, i.component_uuid as componentUuid, p.module_uuid as moduleUuid, p.module_uuid_path as moduleUuidPath, p.path as filePath, root.kee as projectKey, i.project_uuid as projectUuid, i.issue_type as type from issues i inner join rules r on r.id=i.rule_id inner join projects p on p.uuid=i.component_uuid inner join projects root on root.uuid=i.project_uuid where i.component_uuid = ? and i.status <> 'CLOSED'
Cause: java.lang.IllegalStateException: Fail to read ISSUES.LOCATIONS [KEE=AV9aIkjMZ-jmCeoEg-IR]
at org.apache.ibatis.exceptions.ExceptionFactory.wrapException(ExceptionFactory.java:30)
at org.apache.ibatis.session.defaults.DefaultSqlSession.select(DefaultSqlSession.java:172)
at org.apache.ibatis.session.defaults.DefaultSqlSession.select(DefaultSqlSession.java:158)
at org.apache.ibatis.binding.MapperMethod.executeWithResultHandler(MapperMethod.java:126)
at org.apache.ibatis.binding.MapperMethod.execute(MapperMethod.java:72)
at org.apache.ibatis.binding.MapperProxy.invoke(MapperProxy.java:59)
at com.sun.proxy.$Proxy42.scrollNonClosedByComponentUuid(Unknown Source)
at org.sonar.server.computation.task.projectanalysis.issue.ComponentIssuesLoader.loadForComponentUuid(ComponentIssuesLoader.java:73)
at org.sonar.server.computation.task.projectanalysis.issue.ComponentIssuesLoader.loadForComponentUuid(ComponentIssuesLoader.java:51)
at org.sonar.server.computation.task.projectanalysis.issue.CloseIssuesOnRemovedComponentsVisitor.closeIssuesForDeletedComponentUuids(CloseIssuesOnRemovedComponentsVisitor.java:60)
at org.sonar.server.computation.task.projectanalysis.issue.CloseIssuesOnRemovedComponentsVisitor.visitProject(CloseIssuesOnRemovedComponentsVisitor.java:53)
at org.sonar.server.computation.task.projectanalysis.component.TypeAwareVisitorWrapper.visitProject(TypeAwareVisitorWrapper.java:47)
at org.sonar.server.computation.task.projectanalysis.component.VisitorsCrawler.visitNode(VisitorsCrawler.java:120)
at org.sonar.server.computation.task.projectanalysis.component.VisitorsCrawler.visitImpl(VisitorsCrawler.java:100)
at org.sonar.server.computation.task.projectanalysis.component.VisitorsCrawler.visit(VisitorsCrawler.java:72)
... 17 common frames omitted
Caused by: java.lang.IllegalStateException: Fail to read ISSUES.LOCATIONS [KEE=AV9aIkjMZ-jmCeoEg-IR]
at org.sonar.db.issue.IssueDto.parseLocations(IssueDto.java:652)
at org.sonar.db.issue.IssueDto.toDefaultIssue(IssueDto.java:721)
at org.sonar.server.computation.task.projectanalysis.issue.ComponentIssuesLoader.lambda$loadForComponentUuid$1(ComponentIssuesLoader.java:74)
at org.apache.ibatis.executor.resultset.DefaultResultSetHandler.callResultHandler(DefaultResultSetHandler.java:363)
at org.apache.ibatis.executor.resultset.DefaultResultSetHandler.storeObject(DefaultResultSetHandler.java:356)
at org.apache.ibatis.executor.resultset.DefaultResultSetHandler.handleRowValuesForSimpleResultMap(DefaultResultSetHandler.java:348)
at org.apache.ibatis.executor.resultset.DefaultResultSetHandler.handleRowValues(DefaultResultSetHandler.java:322)
at org.apache.ibatis.executor.resultset.DefaultResultSetHandler.handleResultSet(DefaultResultSetHandler.java:298)
at org.apache.ibatis.executor.resultset.DefaultResultSetHandler.handleResultSets(DefaultResultSetHandler.java:192)
at org.apache.ibatis.executor.statement.PreparedStatementHandler.query(PreparedStatementHandler.java:64)
at org.apache.ibatis.executor.statement.RoutingStatementHandler.query(RoutingStatementHandler.java:79)
at org.apache.ibatis.executor.ReuseExecutor.doQuery(ReuseExecutor.java:60)
at org.apache.ibatis.executor.BaseExecutor.queryFromDatabase(BaseExecutor.java:324)
at org.apache.ibatis.executor.BaseExecutor.query(BaseExecutor.java:156)
at org.apache.ibatis.executor.CachingExecutor.query(CachingExecutor.java:109)
at org.apache.ibatis.executor.CachingExecutor.query(CachingExecutor.java:83)
at org.apache.ibatis.session.defaults.DefaultSqlSession.select(DefaultSqlSession.java:170)
... 30 common frames omitted
Caused by: com.google.protobuf.InvalidProtocolBufferException: While parsing a protocol message, the input ended unexpectedly in the middle of a field. This could mean either that the input has been truncated or that an embedded message misreported its own length.
at com.google.protobuf.InvalidProtocolBufferException.truncatedMessage(InvalidProtocolBufferException.java:70)
at com.google.protobuf.CodedInputStream.refillBuffer(CodedInputStream.java:1068)
at com.google.protobuf.CodedInputStream.readRawByte(CodedInputStream.java:1135)
at com.google.protobuf.CodedInputStream.readRawVarint64SlowPath(CodedInputStream.java:778)
at com.google.protobuf.CodedInputStream.readRawVarint32(CodedInputStream.java:637)
at com.google.protobuf.CodedInputStream.readInt32(CodedInputStream.java:348)
at org.sonar.db.protobuf.DbCommons$TextRange.<init>(DbCommons.java:149)
at org.sonar.db.protobuf.DbCommons$TextRange.<init>(DbCommons.java:90)
at org.sonar.db.protobuf.DbCommons$TextRange$1.parsePartialFrom(DbCommons.java:750)
at org.sonar.db.protobuf.DbCommons$TextRange$1.parsePartialFrom(DbCommons.java:744)
at com.google.protobuf.CodedInputStream.readMessage(CodedInputStream.java:495)
at org.sonar.db.protobuf.DbIssues$Locations.<init>(DbIssues.java:99)
at org.sonar.db.protobuf.DbIssues$Locations.<init>(DbIssues.java:55)
at org.sonar.db.protobuf.DbIssues$Locations$1.parsePartialFrom(DbIssues.java:852)
at org.sonar.db.protobuf.DbIssues$Locations$1.parsePartialFrom(DbIssues.java:846)
at com.google.protobuf.AbstractParser.parsePartialFrom(AbstractParser.java:137)
at com.google.protobuf.AbstractParser.parseFrom(AbstractParser.java:169)
at com.google.protobuf.AbstractParser.parseFrom(AbstractParser.java:180)
at com.google.protobuf.AbstractParser.parseFrom(AbstractParser.java:185)
at com.google.protobuf.AbstractParser.parseFrom(AbstractParser.java:49)
at org.sonar.db.protobuf.DbIssues$Locations.parseFrom(DbIssues.java:253)
at org.sonar.db.issue.IssueDto.parseLocations(IssueDto.java:650)
... 46 common frames omitted
2018.05.15 06:59:52 ERROR ce[AWNimASHSOFQrk0D-LC8][o.s.c.t.CeWorkerImpl] Executed task | project=myproject:develop | type=REPORT | id=AWNimASHSOFQrk0D-LC8 | submitter=jenkins | time=70255ms
I actually solved my problem by executing the query below on the SonarQube schema. I guess it was related to the version upgrade. It's not a clean solution, but it was acceptable for me since I didn't need to keep the historical data:
delete from issues where STATUS != "CLOSED"

Google cloud streaming dataflow : Error while fetching side input

Sometimes I get the exception below while running a streaming Dataflow job:
exception: "java.lang.RuntimeException: Exception while fetching side input:
at com.google.cloud.dataflow.sdk.runners.worker.StateFetcher.fetchSideInput(StateFetcher.java:184)
at com.google.cloud.dataflow.sdk.runners.worker.StreamingModeExecutionContext.fetchSideInput(StreamingModeExecutionContext.java:175)
at com.google.cloud.dataflow.sdk.runners.worker.StreamingModeExecutionContext.access$400(StreamingModeExecutionContext.java:56)
at com.google.cloud.dataflow.sdk.runners.worker.StreamingModeExecutionContext$StepContext.issueSideInputFetch(StreamingModeExecutionContext.java:401)
at com.google.cloud.dataflow.sdk.runners.worker.StreamingSideInputFetcher.getReadyWindows(StreamingSideInputFetcher.java:135)
at com.google.cloud.dataflow.sdk.runners.worker.StreamingSideInputDoFnRunner.startBundle(StreamingSideInputDoFnRunner.java:49)
at com.google.cloud.dataflow.sdk.runners.worker.SimpleParDoFn.reallyStartBundle(SimpleParDoFn.java:175)
at com.google.cloud.dataflow.sdk.runners.worker.SimpleParDoFn.startBundle(SimpleParDoFn.java:117)
at com.google.cloud.dataflow.sdk.runners.worker.ForwardingParDoFn.startBundle(ForwardingParDoFn.java:36)
at com.google.cloud.dataflow.sdk.util.common.worker.ParDoOperation.start(ParDoOperation.java:45)
at com.google.cloud.dataflow.sdk.util.common.worker.MapTaskExecutor.execute(MapTaskExecutor.java:69)
at com.google.cloud.dataflow.sdk.runners.worker.StreamingDataflowWorker.process(StreamingDataflowWorker.java:719)
at com.google.cloud.dataflow.sdk.runners.worker.StreamingDataflowWorker.access$600(StreamingDataflowWorker.java:95)
at com.google.cloud.dataflow.sdk.runners.worker.StreamingDataflowWorker$6.run(StreamingDataflowWorker.java:538)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)
Caused by: com.google.cloud.dataflow.worker.repackaged.com.google.common.util.concurrent.UncheckedExecutionException: java.lang.IllegalArgumentException: Duplicate values for 2059
at com.google.cloud.dataflow.worker.repackaged.com.google.common.cache.LocalCache$Segment.get(LocalCache.java:2207)
at com.google.cloud.dataflow.worker.repackaged.com.google.common.cache.LocalCache.get(LocalCache.java:3953)
at com.google.cloud.dataflow.worker.repackaged.com.google.common.cache.LocalCache$LocalManualCache.get(LocalCache.java:4790)
at com.google.cloud.dataflow.sdk.runners.worker.StateFetcher.fetchSideInput(StateFetcher.java:175)
... 16 more
Caused by: java.lang.IllegalArgumentException: Duplicate values for 2059
at com.google.cloud.dataflow.sdk.util.PCollectionViews$MapPCollectionView.fromElements(PCollectionViews.java:291)
at com.google.cloud.dataflow.sdk.util.PCollectionViews$MapPCollectionView.fromElements(PCollectionViews.java:273)
at com.google.cloud.dataflow.sdk.util.PCollectionViews$PCollectionViewBase.fromIterableInternal(PCollectionViews.java:368)
at com.google.cloud.dataflow.sdk.runners.worker.StateFetcher$2.call(StateFetcher.java:152)
at com.google.cloud.dataflow.sdk.runners.worker.StateFetcher$2.call(StateFetcher.java:104)
at com.google.cloud.dataflow.worker.repackaged.com.google.common.cache.LocalCache$LocalManualCache$1.load(LocalCache.java:4793)
at com.google.cloud.dataflow.worker.repackaged.com.google.common.cache.LocalCache$LoadingValueReference.loadFuture(LocalCache.java:3542)
at com.google.cloud.dataflow.worker.repackaged.com.google.common.cache.LocalCache$Segment.loadSync(LocalCache.java:2323)
at com.google.cloud.dataflow.worker.repackaged.com.google.common.cache.LocalCache$Segment.lockedGetOrLoad(LocalCache.java:2286)
at com.google.cloud.dataflow.worker.repackaged.com.google.common.cache.LocalCache$Segment.get(LocalCache.java:2201)
... 19 more
Dataflow worker machine type: n1-standard-4
Worker cache memory (MB): 2048
Dataflow main input: Pub/Sub subscription
I am creating the side input from BT and passing it to multiple transformations. The size of my side input is less than 100 MB.
Thanks.
That error indicates that multiple values with the same key (2059) have been encountered, which violates the expectations for a Map-valued side input. This can happen in streaming especially if you trigger the same value multiple times. If you instead use a Multimap, it should allow you to retrieve all of the values associated with a given key.
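A minimal sketch of that change, using the Dataflow SDK classes from the stack trace; lookup is assumed to be a PCollection<KV<String, String>> and c the ProcessContext of the consuming DoFn, so adapt the types and names to your pipeline.
import com.google.cloud.dataflow.sdk.transforms.View;
import com.google.cloud.dataflow.sdk.values.PCollectionView;
import java.util.Map;

// Build the side input as a multimap so duplicate keys are tolerated.
PCollectionView<Map<String, Iterable<String>>> lookupView =
    lookup.apply(View.<String, String>asMultimap());

// Inside the consuming DoFn, every value recorded for a key is then available:
Iterable<String> valuesForKey = c.sideInput(lookupView).get("2059");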

Getting timeout exception while connecting Jira to fetch issues

I am getting the error below in the logs while fetching Jira issues with a particular JQL query. I have set up a scheduler to run that JQL every 30 minutes to sync the retrieved issues to another tool.
java.util.concurrent.ExecutionException: java.lang.RuntimeException: java.util.concurrent.TimeoutException
at com.google.common.util.concurrent.AbstractFuture$Sync.getValue(AbstractFuture.java:294)
at com.google.common.util.concurrent.AbstractFuture$Sync.get(AbstractFuture.java:267)
at com.google.common.util.concurrent.AbstractFuture.get(AbstractFuture.java:96)
at com.google.common.util.concurrent.ForwardingFuture.get(ForwardingFuture.java:69)
at com.asurion.Autopilot.main(Autopilot.java:104)
Caused by: java.lang.RuntimeException: java.util.concurrent.TimeoutException
at com.google.common.base.Throwables.propagate(Throwables.java:160)
at com.atlassian.httpclient.apache.httpcomponents.DefaultHttpClient$3.apply(DefaultHttpClient.java:256)
at com.atlassian.httpclient.apache.httpcomponents.DefaultHttpClient$3.apply(DefaultHttpClient.java:249)
at com.atlassian.util.concurrent.Promises$Of$2.apply(Promises.java:276)
at com.atlassian.util.concurrent.Promises$Of$2.apply(Promises.java:272)
at com.atlassian.util.concurrent.Promises$2.onFailure(Promises.java:167)
at com.google.common.util.concurrent.Futures$6.run(Futures.java:801)
at com.google.common.util.concurrent.MoreExecutors$SameThreadExecutorService.execute(MoreExecutors.java:262)
at com.google.common.util.concurrent.ExecutionList$RunnableExecutorPair.execute(ExecutionList.java:149)
at com.google.common.util.concurrent.ExecutionList.execute(ExecutionList.java:134)
at com.google.common.util.concurrent.AbstractFuture.setException(AbstractFuture.java:193)
at com.google.common.util.concurrent.SettableFuture.setException(SettableFuture.java:68)
at com.atlassian.httpclient.apache.httpcomponents.SettableFuturePromiseHttpPromiseAsyncClient$1$3.run(SettableFuturePromiseHttpPromiseAsyncClient.java:73)
at com.atlassian.httpclient.apache.httpcomponents.SettableFuturePromiseHttpPromiseAsyncClient$ThreadLocalDelegateRunnable$1.run(SettableFuturePromiseHttpPromiseAsyncClient.java:197)
at com.atlassian.httpclient.apache.httpcomponents.SettableFuturePromiseHttpPromiseAsyncClient.runInContext(SettableFuturePromiseHttpPromiseAsyncClient.java:90)
at com.atlassian.httpclient.apache.httpcomponents.SettableFuturePromiseHttpPromiseAsyncClient$ThreadLocalDelegateRunnable.run(SettableFuturePromiseHttpPromiseAsyncClient.java:192)
at java.util.concurrent.ThreadPoolExecutor.runWorker(Unknown Source)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(Unknown Source)
at java.lang.Thread.run(Unknown Source)
Caused by: java.util.concurrent.TimeoutException
at com.atlassian.httpclient.apache.httpcomponents.SettableFuturePromiseHttpPromiseAsyncClient$1.doCancelled(SettableFuturePromiseHttpPromiseAsyncClient.java:67)
at com.atlassian.httpclient.apache.httpcomponents.SettableFuturePromiseHttpPromiseAsyncClient$ThreadLocalContextAwareFutureCallback$3.run(SettableFuturePromiseHttpPromiseAsyncClient.java:152)
at com.atlassian.httpclient.apache.httpcomponents.SettableFuturePromiseHttpPromiseAsyncClient.runInContext(SettableFuturePromiseHttpPromiseAsyncClient.java:90)
at com.atlassian.httpclient.apache.httpcomponents.SettableFuturePromiseHttpPromiseAsyncClient$ThreadLocalContextAwareFutureCallback.cancelled(SettableFuturePromiseHttpPromiseAsyncClient.java:147)
at org.apache.http.impl.client.cache.CachingHttpAsyncClient$2.cancelled(CachingHttpAsyncClient.java:636)
at org.apache.http.concurrent.BasicFuture.cancel(BasicFuture.java:150)
at org.apache.http.impl.nio.client.DefaultResultCallback.cancelled(DefaultResultCallback.java:57)
at org.apache.http.impl.nio.client.DefaultAsyncRequestDirector.cancel(DefaultAsyncRequestDirector.java:533)
at org.apache.http.impl.nio.client.DefaultAsyncRequestDirector$1.abortConnection(DefaultAsyncRequestDirector.java:222)
at org.apache.http.client.methods.HttpRequestBase.cleanup(HttpRequestBase.java:137)
at org.apache.http.client.methods.HttpRequestBase.abort(HttpRequestBase.java:151)
at com.atlassian.httpclient.base.RequestKiller$RequestEntry.abort(RequestKiller.java:98)
at com.atlassian.httpclient.base.RequestKiller.run(RequestKiller.java:56)
... 1 more
This error occurs frequently but not continuously. Sometimes it works, many times it doesn't.
Jira Version: 7.1.4#71008
Jira REST Java Client Version: 2.0.0-m2
JIRA REST java-client-core Version: 2.0.0-m25
Please let me know if you need more details.
Thanks in advance.
Regards,
Tushar
Looks like a timeout on your request, which comes and goes depending on server load. Can you streamline or break up the query, set a longer timeout, or put other activity on the server on hold during the sync?
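One way to break the query up is to page through the results with the JRJC client the question already uses, so each individual request stays short. This is only a sketch against the 2.0 API; the URL, credentials, JQL and page size are placeholders, not the exact fix.
import com.atlassian.jira.rest.client.api.JiraRestClient;
import com.atlassian.jira.rest.client.api.domain.Issue;
import com.atlassian.jira.rest.client.api.domain.SearchResult;
import com.atlassian.jira.rest.client.internal.async.AsynchronousJiraRestClientFactory;
import java.net.URI;

JiraRestClient client = new AsynchronousJiraRestClientFactory()
    .createWithBasicHttpAuthentication(URI.create("https://jira.example.com"), "user", "secret");

int pageSize = 50;   // smaller pages keep each HTTP request well under the timeout
int startAt = 0;
SearchResult page;
do {
    page = client.getSearchClient()
        .searchJql("updated >= -30m ORDER BY updated", pageSize, startAt, null)
        .claim();    // or get(timeout, unit) if you want an explicit per-request bound
    for (Issue issue : page.getIssues()) {
        // push the issue to the other tool here
    }
    startAt += pageSize;
} while (startAt < page.getTotal());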
There was some issue with the Jira server. We don't have access to it, but after a few days the issue seems to be gone. Not sure what happened on the Jira side.

Unable to import neo4j database with blueprints

I'm trying to open a Neo4j database using the Blueprints implementation, but I get the following exceptions:
Neo4jGraph graph = new Neo4jGraph("/Users/pipe/Dev/neo4j-community-2.1.0-M01/data/graph.db");
This causes:
Caused by: javax.faces.el.EvaluationException: java.lang.RuntimeException: Bad value '-192M' for setting 'neostore.propertystore.db.strings.mapped_memory': value does not match expression:\d+[kmgKMG]?
at javax.faces.component.MethodBindingMethodExpressionAdapter.invoke(MethodBindingMethodExpressionAdapter.java:102)
at com.sun.faces.application.ActionListenerImpl.processAction(ActionListenerImpl.java:101)
... 32 more
Caused by: java.lang.RuntimeException: Bad value '-192M' for setting 'neostore.propertystore.db.strings.mapped_memory': value does not match expression:\d+[kmgKMG]?
at com.tinkerpop.blueprints.impls.neo4j.Neo4jGraph.<init>(Neo4jGraph.java:165)
at com.tinkerpop.blueprints.impls.neo4j.Neo4jGraph.<init>(Neo4jGraph.java:135)
at org.pipe.java.web.netnografica.persistenza.graphdb.DAONodo.toGraphml(DAONodo.java:204)
at org.pipe.java.web.netnografica.controllo.ControlloGenerale.esportaGraphml(ControlloGenerale.java:133)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:606)
at org.apache.el.parser.AstValue.invoke(AstValue.java:278)
at org.apache.el.MethodExpressionImpl.invoke(MethodExpressionImpl.java:274)
at com.sun.faces.facelets.el.TagMethodExpression.invoke(TagMethodExpression.java:105)
at javax.faces.component.MethodBindingMethodExpressionAdapter.invoke(MethodBindingMethodExpressionAdapter.java:88)
... 33 more
Caused by: java.lang.IllegalArgumentException: Bad value '-192M' for setting 'neostore.propertystore.db.strings.mapped_memory': value does not match expression:\d+[kmgKMG]?
at org.neo4j.helpers.Settings$DefaultSetting.apply(Settings.java:782)
at org.neo4j.helpers.Settings$DefaultSetting.apply(Settings.java:702)
at org.neo4j.graphdb.factory.GraphDatabaseSetting$SettingWrapper.apply(GraphDatabaseSetting.java:215)
at org.neo4j.graphdb.factory.GraphDatabaseSetting$SettingWrapper.apply(GraphDatabaseSetting.java:189)
at org.neo4j.kernel.configuration.ConfigurationValidator.validate(ConfigurationValidator.java:50)
at org.neo4j.kernel.configuration.Config.applyChanges(Config.java:121)
at org.neo4j.kernel.InternalAbstractGraphDatabase.create(InternalAbstractGraphDatabase.java:339)
at org.neo4j.kernel.InternalAbstractGraphDatabase.run(InternalAbstractGraphDatabase.java:253)
at org.neo4j.kernel.EmbeddedGraphDatabase.<init>(EmbeddedGraphDatabase.java:106)
at org.neo4j.kernel.EmbeddedGraphDatabase.<init>(EmbeddedGraphDatabase.java:81)
at org.neo4j.kernel.EmbeddedGraphDatabase.<init>(EmbeddedGraphDatabase.java:63)
at com.tinkerpop.blueprints.impls.neo4j.Neo4jGraph.<init>(Neo4jGraph.java:155)
... 44 more
There seems to be a need to provide a properties file. Is that correct?
Edited to answer Michael Hunger:
Well, I changed the Blueprints version to 2.5.0-SNAPSHOT, but nothing changed. So I provided the configuration using the map accepted by the constructor:
Map<String, String> configurazione = new HashMap<String, String>();
configurazione.put("neostore.propertystore.db.strings.mapped_memory", "250M");
configurazione.put("neostore.propertystore.db.arrays.mapped_memory", "100M");
configurazione.put("neostore.relationshipstore.db.mapped_memory", "3845M");
configurazione.put("neostore.nodestore.db.mapped_memory", "350M");
configurazione.put("neostore.propertystore.db.mapped_memory", "350M");
configurazione.put("neostore.nodestore.db.mapped_memory", "769M");
Neo4j2Graph grafo = new Neo4j2Graph("/Users/pipe/Dev/neo4j-community-2.1.0-M01/data/graph.db", configurazione);
Now the exception has changed, and I really don't know what is wrong. I linked the complete stack trace on Pastebin:
http://pastebin.com/XpipSysp
In the end a NoSuchMethodError is thrown. What am I missing?
Thanks a lot.
Which Blueprints version are you using?
Blueprints 2.5-SNAPSHOT is compatible with Neo4j 2.0.0.
Please note there is a separate module for Neo4j 2.0 called blueprints-neo4j2, and the classes are called Neo4j2Graph, Neo4j2Vertex, etc.
You should also be able to provide config to the Neo4j2Graph.
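For completeness: if you pull Blueprints in via Maven, the Neo4j 2.x support ships as its own artifact; I believe the coordinates are along the lines of com.tinkerpop.blueprints:blueprints-neo4j2-graph (matched to your Blueprints version), but verify the artifact id against the Blueprints release you use, as that exact name is my assumption. A NoSuchMethodError at runtime typically points to a version mismatch like this on the classpath.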
