I am trying to read an Avro file with an embedded schema using the following command:
avro-tools tojson data.avro
However, I am getting the following exception:
22/11/08 14:34:56 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
Exception in thread "main" org.apache.avro.AvroRuntimeException: java.io.EOFException
at org.apache.avro.file.DataFileStream.next(DataFileStream.java:238)
at org.apache.avro.tool.DataFileReadTool.run(DataFileReadTool.java:98)
at org.apache.avro.tool.Main.run(Main.java:67)
at org.apache.avro.tool.Main.main(Main.java:56)
Caused by: java.io.EOFException
at org.apache.avro.io.BinaryDecoder.ensureBounds(BinaryDecoder.java:542)
at org.apache.avro.io.BinaryDecoder.readInt(BinaryDecoder.java:173)
at org.apache.avro.io.BinaryDecoder.readBytes(BinaryDecoder.java:332)
at org.apache.avro.io.ResolvingDecoder.readBytes(ResolvingDecoder.java:242)
at org.apache.avro.generic.GenericDatumReader.readBytes(GenericDatumReader.java:544)
at org.apache.avro.generic.GenericDatumReader.readBytes(GenericDatumReader.java:535)
at org.apache.avro.generic.GenericDatumReader.readWithoutConversion(GenericDatumReader.java:194)
at org.apache.avro.generic.GenericDatumReader.read(GenericDatumReader.java:161)
at org.apache.avro.generic.GenericDatumReader.readField(GenericDatumReader.java:260)
at org.apache.avro.generic.GenericDatumReader.readRecord(GenericDatumReader.java:248)
at org.apache.avro.generic.GenericDatumReader.readWithoutConversion(GenericDatumReader.java:180)
at org.apache.avro.generic.GenericDatumReader.read(GenericDatumReader.java:161)
at org.apache.avro.generic.GenericDatumReader.read(GenericDatumReader.java:154)
at org.apache.avro.file.DataFileStream.next(DataFileStream.java:251)
at org.apache.avro.file.DataFileStream.next(DataFileStream.java:236)
... 3 more
The file is supposed to have the schema embedded and one object inside.
Conduktor is able to read the file, but avro-tools isn't.
The file was generated using the following code:
val outputStream = ByteArrayOutputStream()
val writer = AvroDataOutputStream<GenericRecord>(outputStream, { it }, data.schema, CodecFactory.nullCodec())
writer.write(data)
writer.flush()
writer.close()
return outputStream.toByteArray()
How can I view this file from the command line?
It would also be nice to suppress or fix the avro-tools warning.
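Before retrying tojson, it may help to inspect just the file header; avro-tools has subcommands that read only the header metadata, so they can still succeed when decoding a record body fails:

```shell
# Print the schema embedded in the file header
avro-tools getschema data.avro

# Print the file's metadata key/value pairs, including the codec
avro-tools getmeta data.avro
```

If these work but tojson still throws EOFException, the header is intact and one likely cause is a truncated record body in the data blocks that follow it.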
I migrated my old project from some SimpleLogger to Log4J. I hate Log4j because it is extremely difficult to set up.
Now, how do I feed some config into it?
Configuration:
  status: info
  name: YAMLConfigTest
  thresholdFilter:
    level: info
  Appenders:
    Console:
      name: STDOUT
      target: SYSTEM_OUT
      PatternLayout:
        Pattern: "%d{HH:mm:ss.SSS} [%t] %-5level %logger{36} - %msg%n"
I tried loading it using the log4j2.configurationFile variable. But log4j always tries to interpret the file as XML. Why is that? Is there anything that I can do about it?
ERROR StatusLogger Error parsing /fullpath/log4j2.yaml
org.xml.sax.SAXParseException; systemId: file:///fullpath/log4j2.yaml; lineNumber: 1; columnNumber: 1; Content is not allowed in prolog.
at com.sun.org.apache.xerces.internal.parsers.DOMParser.parse(DOMParser.java:257)
...
at org.apache.logging.log4j.LogManager.getContext(LogManager.java:196)
at org.apache.logging.log4j.LogManager.getLogger(LogManager.java:599)
at de.tisje.dendro.plot.swing.Kurvenplot.<clinit>(Kurvenplot.java:91)
An alternative solution would be to put the file somewhere it can be found automatically.
Where do I put the file in a standard Eclipse/Gradle project?
Do I have to tell Gradle how to handle it?
Thanks
The XML ConfigurationFactory is a "catch-all" factory: it tries to read the files that were rejected by all the other configuration factories.
While the YAML configuration factory is distributed with log4j-core, it requires additional runtime dependencies to be active (this is mentioned only briefly in the documentation): you need to add jackson-dataformat-yaml to your project.
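For a Gradle build, the missing piece is Jackson's YAML dataformat module; a minimal sketch, with the version as a placeholder you should align with the Jackson version your project already uses:

```groovy
dependencies {
    // Enables log4j-core's YAML configuration factory
    runtimeOnly 'com.fasterxml.jackson.dataformat:jackson-dataformat-yaml:2.13.4'
}
```

With that on the runtime classpath, log4j2.yaml is handled by the YAML factory instead of falling through to the XML catch-all.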
I used tdbloader2 (in Jena 3.2.0) to build a database from freebase-rdf-latest.
An error occurred:
[cc#localhost dir]$ ../apache-jena-3.2.0/bin/tdbloader2 --loc=./d-freebase/ ./freebase-rdf-latest
21:33:22 INFO -- TDB Bulk Loader Start
21:33:22 INFO Data Load Phase
21:33:22 INFO Got 1 data files to load
21:33:22 INFO Data file 1: freebase-rdf-latest
INFO Load: freebase-rdf-latest -- 2017/04/06 21:33:23 CST
org.apache.jena.riot.RiotException: Failed to determine the content type: (URI=freebase-rdf-latest : stream=null)
at org.apache.jena.riot.RDFDataMgr.process(RDFDataMgr.java:854)
at org.apache.jena.riot.RDFDataMgr.parse(RDFDataMgr.java:667)
at org.apache.jena.riot.RDFDataMgr.parse(RDFDataMgr.java:637)
at org.apache.jena.riot.RDFDataMgr.parse(RDFDataMgr.java:626)
at org.apache.jena.riot.RDFDataMgr.parse(RDFDataMgr.java:617)
at org.apache.jena.tdb.store.bulkloader2.ProcNodeTableBuilder.exec(ProcNodeTableBuilder.java:78)
at tdb.bulkloader2.CmdNodeTableBuilder.exec(CmdNodeTableBuilder.java:113)
at jena.cmd.CmdMain.mainMethod(CmdMain.java:93)
at jena.cmd.CmdMain.mainRun(CmdMain.java:58)
at jena.cmd.CmdMain.mainRun(CmdMain.java:45)
at tdb.bulkloader2.CmdNodeTableBuilder.main(CmdNodeTableBuilder.java:61)
21:33:23 ERROR Failed during data phase
But when I used tdbloader2 in Jena 2.12.1, no error occurred!
So I wonder: why is a RiotException raised when I use tdbloader2 in Jena 3.2.0?
The format of freebase-rdf-latest:
<http://rdf.freebase.com/ns/american_football.football_player.footballdb_id> <http://rdf.freebase.com/ns/type.object.type> <http://rdf.freebase.com/ns/type.property> .
<http://rdf.freebase.com/ns/american_football.football_player.footballdb_id> <http://rdf.freebase.com/ns/type.object.name> "footballdb ID"#en .
<http://rdf.freebase.com/ns/american_football.football_player.footballdb_id> <http://rdf.freebase.com/ns/type.property.unique> "true" .
I'm not sure about Jena 2.12.1, but I've found that with Jena 3.2.0 I need to specify the content type via the file extension.
Try renaming your input file from "./freebase-rdf-latest" to "./freebase-rdf-latest.nt".
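In shell terms (assuming the dump really is N-Triples, as your sample suggests), the fix is just a rename so RIOT can infer the syntax from the .nt extension:

```shell
mv freebase-rdf-latest freebase-rdf-latest.nt
../apache-jena-3.2.0/bin/tdbloader2 --loc=./d-freebase/ ./freebase-rdf-latest.nt
```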
Receiving Unhandled error - org.apache.flume.node.PollingPropertiesFileConfigurationProvider$FileWatchRunnable.run(PollingPropertiesFileConfigurationProvider.java)
I am trying to extract Tweets into HDFS using Flume, but I am getting org.apache.flume.node.PollingPropertiesFileConfigurationProvider$FileWatchRunnable.run(PollingPropertiesFileConfigurationProvider.java) Unhandled Error.
The configuration, env.sh file, and .bashrc file details are included in the attached file. The file also contains the Twitter4j version (4j.0
Please have a look and suggest a resolution.
DETAILS ATTACHED....
I was getting the same error with Flume 1.6. In flume.conf, update the Twitter source type as below:
TwitterAgent.sources.Twitter.type = org.apache.flume.source.twitter.TwitterSource
I didn't use flume-sources-1.0-SNAPSHOT.jar.
Thanks.
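For context, a minimal sketch of the relevant flume.conf section with the built-in source; the agent/source names and the credential placeholders are assumptions, only the type line is the actual fix:

```
TwitterAgent.sources = Twitter
TwitterAgent.sources.Twitter.type = org.apache.flume.source.twitter.TwitterSource
TwitterAgent.sources.Twitter.consumerKey = <your-consumer-key>
TwitterAgent.sources.Twitter.consumerSecret = <your-consumer-secret>
TwitterAgent.sources.Twitter.accessToken = <your-access-token>
TwitterAgent.sources.Twitter.accessTokenSecret = <your-access-token-secret>
```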
I deployed a Grails 3.2.0 WAR on Tomcat 8.5.6 with JDK 1.8.0_91, with a simple controller containing the following code:
package com.test
class MailController {
    static responseFormats = ['json']

    def index() {
        Map headers = (request.headerNames as List).collectEntries { // It fails on this line
            return [(it): request.getHeader(it)]
        }
        println "Incoming email $headers"
        render status: 200
    }
}
This code fails with the following exception:
Caused by: java.lang.NoClassDefFoundError: groovy/lang/GroovyObject
at java.lang.ClassLoader.defineClass(ClassLoader.java:763)
at java.lang.ClassLoader.defineClass(ClassLoader.java:642)
at groovy.util.ProxyGenerator.instantiateDelegateWithBaseClass(ProxyGenerator.java:225)
at groovy.util.ProxyGenerator.instantiateDelegateWithBaseClass(ProxyGenerator.java:193)
at groovy.util.ProxyGenerator.instantiateDelegate(ProxyGenerator.java:185)
at groovy.util.ProxyGenerator.instantiateDelegate(ProxyGenerator.java:181)
at org.grails.web.converters.ConverterUtil.invokeOriginalAsTypeMethod(ConverterUtil.java:161)
at org.grails.web.converters.ConvertersExtension.asType(ConvertersExtension.groovy:56)
at com.test.MailController.index(MailController.groovy:7)
at org.grails.core.DefaultGrailsControllerClass$MethodHandleInvoker.invoke(DefaultGrailsControllerClass.java:222)
at org.grails.core.DefaultGrailsControllerClass.invoke(DefaultGrailsControllerClass.java:187)
at org.grails.web.mapping.mvc.UrlMappingsInfoHandlerAdapter.handle(UrlMappingsInfoHandlerAdapter.groovy:90)
at org.springframework.web.servlet.DispatcherServlet.doDispatch(DispatcherServlet.java:963)
... 14 common frames omitted
Caused by: java.lang.ClassNotFoundException: groovy.lang.GroovyObject
at java.net.URLClassLoader.findClass(URLClassLoader.java:381)
at java.lang.ClassLoader.loadClass(ClassLoader.java:424)
... 27 common frames omitted
Before building the WAR file, I changed the embedded Tomcat dependency to provided in build.gradle and also commented out the groovy-ant dependency, related to grails-core#10196.
I saw an answer here, but that didn't work, and the above code works fine when run via grails run-app.
Update
I narrowed down the issue: it is failing only on this part: request.headerNames as List
I am pretty sure the problem is the use of "as List", mostly because Grails overwrites Groovy's asType implementation, which is what makes the "as X" coercion syntax work.
Grails does this to add support for things like JSON, for marshalling known Grails types to web transport formats.
Unfortunately, in doing so, Grails also breaks any asType function you might have declared yourself, or, in this case, one that Groovy itself already declares for converting an Enumeration into a List.
It's quite annoying, as Grails is effectively breaking existing contracts here and forcing you to modify upstream code to allow it to run on Grails. That, or dump Grails because it doesn't play nice with perfectly valid Groovy code.
I believe replacing "as List" with .asType(List) won't fix the issue either, as you're still invoking the same code. At best you could try .collect([]) {it} instead. It may not even be necessary to pass the empty list as the first argument to collect.
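Another JDK-level alternative (not from the thread above, just a sketch) is java.util.Collections.list, which copies an Enumeration into an ArrayList without going through Groovy's coercion machinery at all; the header names below are stand-ins for a real HttpServletRequest:

```java
import java.util.Arrays;
import java.util.Collections;
import java.util.Enumeration;
import java.util.List;
import java.util.Vector;

public class HeaderNames {
    public static void main(String[] args) {
        // Stand-in for request.getHeaderNames(); a real HttpServletRequest
        // returns an Enumeration<String> the same way.
        Enumeration<String> headerNames =
                new Vector<>(Arrays.asList("Host", "Accept")).elements();

        // Collections.list drains the Enumeration into an ArrayList,
        // so no "as List" / asType call is ever made.
        List<String> names = Collections.list(headerNames);
        System.out.println(names); // prints [Host, Accept]
    }
}
```

From Groovy code the call is identical: Collections.list(request.headerNames).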
When deploying the following web service:
@WebService(serviceName = "TestService")
@SOAPBinding(use = Use.LITERAL, style = Style.DOCUMENT, parameterStyle = SOAPBinding.ParameterStyle.WRAPPED)
public class KekeDummyWebservice implements kekeService {...
on one of my servers, I get the following error:
javax.wsdl.WSDLException: WSDLException:faultCode=CONFIGURATION_ERROR: Unsupported Java encoding for writing wsdl file: 'ISO8859_15'.
I don't know where the 'ISO8859_15' encoding comes from. WildFly prints out
-Dfile.encoding=ISO-8859-15
while starting. Another point is that during WildFly startup the warning
[jacorb.codeset] (MSC service thread 1-7) Warning - unknown codeset (ISO8859_15) - defaulting to ISO-8859-1
can be seen.
Thanks
I had a similar issue on our system, with the following error in the log during deployment on WildFly 10:
Caused by: javax.wsdl.WSDLException: WSDLException: faultCode=CONFIGURATION_ERROR: Unsupported Java encoding for writing wsdl file: 'Cp1252'
Finally, it was solved by re-saving the xsd/wsdl files in UTF-8 and setting the targetNamespace in the WS implementation according to what was defined in the xsd/wsdl. This approach is not fully applicable to your case, but maybe it helps.
Try running the locale command; it will show LANG=en_US.ISO-8859-15.
Update LANG to en_US.UTF-8 using export.
This solved the issue.
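In shell terms, that is one line in the profile (or in the shell that launches WildFly); the exact locale name is an assumption, use any UTF-8 locale installed on your system:

```shell
# Check the current locale settings
locale

# Switch the session to a UTF-8 locale before starting WildFly
export LANG=en_US.UTF-8
echo "$LANG"
```

Alternatively, a similar effect can be had per-JVM by adding -Dfile.encoding=UTF-8 to WildFly's JAVA_OPTS (e.g. in standalone.conf), though fixing the locale also addresses the jacorb codeset warning.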