Issue with loading Parquet data into Snowflake Cloud Database when written with parquet-avro v1.11.0

I am new to Snowflake, but my company has been using it successfully.
Parquet files are currently being written with an existing Avro Schema, using Java parquet-avro v1.10.1.
I have been updating the dependencies in order to use the latest Avro, and part of that bumped Parquet to 1.11.0.
The Avro schema is unchanged. However, when using Snowflake's COPY INTO command, I receive a LOAD FAILED with the error "Error parsing the parquet file: Logical type Null can not be applied to group node", but no other error details :(
The problem is that there are no null columns in the files.
I've cut the Avro schema down, and found that the presence of a MAP type in the Avro schema is causing the issue.
The field is:
{
  "name": "FeatureAmounts",
  "type": {
    "type": "map",
    "values": "records.MoneyDecimal"
  }
}
Here is the Parquet schema of an example file, as printed by parquet-tools:
message record.ResponseRecord {
  required binary GroupId (STRING);
  required int64 EntryTime (TIMESTAMP(MILLIS,true));
  required int64 HandlingDuration;
  required binary Id (STRING);
  optional binary ResponseId (STRING);
  required binary RequestId (STRING);
  optional fixed_len_byte_array(12) CostInUSD (DECIMAL(28,15));
  required group FeatureAmounts (MAP) {
    repeated group map (MAP_KEY_VALUE) {
      required binary key (STRING);
      required fixed_len_byte_array(12) value (DECIMAL(28,15));
    }
  }
}
The two files I have, one written with Parquet 1.10.1 and one with 1.11.0, print this identical schema.
I have also tried with a bigger schema example, and everything works fine as long as there is no "map" Avro type present in the schema. I have other massive files with huge schemas and many union types (which convert to groups in Parquet), and all of them are written and read successfully when they don't contain any "map" types.
But as soon as I add the "map" type back, I get that odd error message from Snowflake when trying to ingest the 1.11.0 version (the 1.10.1 version loads successfully). Yet parquet-tools with 1.11.0, 1.10.1, etc. can still read the files.
I understand from this comment that there are changes to the logical types in Parquet 1.11.0, but that files are supposed to remain compatible for older versions to read.
But does anyone know which version of Parquet Snowflake uses to parse these files? Is there something else that could be going on here?
I'd appreciate any assistance.

Logical type Null can not be applied to group node
Looking up the error above, it appears that a version of Apache Arrow's Parquet libraries is being used to read the file.
Looking closer, however, the real problem lies in the use of legacy types within the Avro-based Parquet writer implementation (the following assumes Java was used to write the files).
The new logicalTypes schema metadata introduced in Parquet defines many types, including a singular MAP type. Historically, the former convertedTypes schema field supported MAP and MAP_KEY_VALUE for legacy readers. The new writers that use logicalTypes (1.11.0+) should no longer emit the legacy map type, but work hasn't been done yet to update the Avro-to-Parquet schema conversions to drop the MAP_KEY_VALUE types entirely.
As a result, the schema field for MAP_KEY_VALUE gets written out with an UNKNOWN value of logicalType, which trips up Arrow's implementation, which (understandably) only understands logicalType values of MAP and LIST on groups.
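If you want to inspect this yourself, below is a minimal sketch (the file path is a placeholder) that dumps both the legacy converted type and the new logical type annotation for every field in a file's footer schema, using the Java Parquet API:

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.parquet.hadoop.ParquetFileReader;
import org.apache.parquet.hadoop.util.HadoopInputFile;
import org.apache.parquet.schema.GroupType;
import org.apache.parquet.schema.Type;

public class DumpAnnotations {
    public static void main(String[] args) throws Exception {
        try (ParquetFileReader reader = ParquetFileReader.open(
                HadoopInputFile.fromPath(new Path("file.parquet"), new Configuration()))) {
            print(reader.getFooter().getFileMetaData().getSchema(), "");
        }
    }

    // Recursively print each field's legacy converted type next to its new
    // logical type annotation, so an oddly annotated MAP_KEY_VALUE group
    // stands out.
    static void print(GroupType group, String indent) {
        for (Type field : group.getFields()) {
            System.out.println(indent + field.getName()
                    + "  converted=" + field.getOriginalType()
                    + "  logical=" + field.getLogicalTypeAnnotation());
            if (!field.isPrimitive()) {
                print(field.asGroupType(), indent + "  ");
            }
        }
    }
}

Comparing the output for the 1.10.1 and 1.11.0 files can help confirm where the annotations diverge.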
Consider logging this as a bug against the Apache Parquet project so that their Avro writers stop nesting the legacy MAP_KEY_VALUE type when transforming an Avro schema to a Parquet one. Ideally it would have been done as part of PARQUET-1410.
Unfortunately this is hard-coded behaviour, and there are no configuration options for map types that would help produce a correct file for Apache Arrow (and, by extension, Snowflake). You'll need to use an older version of the writer until a proper fix is released by the Apache Parquet developers.

Related

How to validate format of an Avro schema file to see if it conforms to Apache Avro specification

Our system must process Avro schemas. Before sending an Avro schema file to the server, I want to validate the format of the submitted schema file to see if it conforms to the Apache Avro specification.
The Avro schema is a JSON file, so to do basic validation against the Avro specification I need a JSON schema for the Avro schema file (I know that sounds confusing). Unfortunately, the Apache Avro specification does not provide any definition file for the Avro schema which I could run through a validator.
Does anybody know where I can find a JSON Schema defining the structure of an Avro schema file according to the Apache Avro specification?
If you have an Avro file, that file contains the schema itself and is therefore already "valid". If the file cannot be created with the schema you've given, you should get an exception (or, at least, any invalid property would be ignored).
You can get that schema via
java -jar avro-tools.jar getschema file.avro
I'm not aware of a way to validate a file against a different schema without going through the Avro client library's reader methods.
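That said, if the goal is just to check that a standalone .avsc file conforms to the specification, one option is simply to let the Avro library parse it and catch the failure. A minimal sketch (the file name is a placeholder):

import java.io.File;
import java.io.IOException;

import org.apache.avro.Schema;
import org.apache.avro.SchemaParseException;

public class ValidateSchema {
    public static void main(String[] args) {
        try {
            // Schema.Parser throws SchemaParseException for anything that
            // does not conform to the Avro specification.
            Schema schema = new Schema.Parser().parse(new File("schema.avsc"));
            System.out.println("Valid schema: " + schema.getFullName());
        } catch (SchemaParseException | IOException e) {
            System.err.println("Invalid schema: " + e.getMessage());
        }
    }
}

And if you generate Java classes from your schemas, you can assert that a generated class's schema matches its .avsc source: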
import static org.assertj.core.api.Assertions.assertThat;

import java.io.IOException;

import org.apache.avro.Schema;
import org.junit.jupiter.api.Test;

@Test
void testSchema() throws IOException {
    Schema classSchema = FooEvent.getClassSchema();
    Schema sourceSchema = new Schema.Parser()
        .parse(getClass()
            .getResourceAsStream("/path/to/FooEvent.avsc"));
    assertThat(classSchema).isEqualTo(sourceSchema);
}

How to convert Db2 query result set to an XML file based on a given XSL using IBM DataStage?

Trying to convert a Db2 query result set to an XML file based on an XSL. Can we use the below pattern?
DB2 Connector -> XML_Transformer Stage (imported XSL) -> XML_Output Stage
Thanks...R
There are multiple options. Assuming you do not already have XML in your Db2 table, you do not need the XML Transformer.
I strongly suggest you use the modern Hierarchical stage (also known as the XML stage, depending on the version of DataStage), so I would go for the following structure if you want a file or files as the target:
Db2 Connector -> Hierarchical stage -> Sequential File stage
In addition, Db2 offers lots of XML functionality to generate XML by using SQL or XQuery.
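As a rough illustration of that last point, here is a minimal sketch of generating XML inside Db2 with the SQL/XML publishing functions and fetching it over JDBC; the table and column names are made up, and the connection details are placeholders:

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;

public class Db2XmlSketch {
    public static void main(String[] args) throws Exception {
        // XMLELEMENT builds the XML inside Db2; XMLSERIALIZE turns it into
        // a character value the client can read. ORDERS, ORDER_ID and TOTAL
        // are hypothetical names.
        String sql = "SELECT XMLSERIALIZE("
                + "XMLELEMENT(NAME \"order\","
                + " XMLELEMENT(NAME \"id\", o.ORDER_ID),"
                + " XMLELEMENT(NAME \"total\", o.TOTAL)"
                + ") AS CLOB(1M)) FROM ORDERS o";
        try (Connection con = DriverManager.getConnection(
                     "jdbc:db2://localhost:50000/SAMPLE", "user", "password");
             Statement st = con.createStatement();
             ResultSet rs = st.executeQuery(sql)) {
            while (rs.next()) {
                System.out.println(rs.getString(1)); // one XML fragment per row
            }
        }
    }
}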

Apache Beam and avro : Create a dataflow pipeline without schema

I am building a dataflow pipeline with Apache Beam. Below is the pseudo-code:
PCollection<GenericRecord> rows = pipeline
    .apply("Read Json from PubSub", <some reader>)
    .apply("Convert Json to pojo", ParDo.of(new JsonToPojo()))
    .apply("Convert pojo to GenericRecord", ParDo.of(new PojoToGenericRecord()))
    .setCoder(AvroCoder.of(GenericRecord.class, schema));
I am trying to get rid of setting the coder in the pipeline, as the schema won't be known at pipeline creation time (it will be present in the message).
I commented out the line that sets the coder and got an exception saying that a default coder is not configured. I then used the one-argument version of the of method and got the following exception:
Not a Specific class: interface org.apache.avro.generic.GenericRecord
at org.apache.avro.specific.SpecificData.createSchema(SpecificData.java:285)
at org.apache.avro.reflect.ReflectData.createSchema(ReflectData.java:594)
at org.apache.avro.specific.SpecificData$2.load(SpecificData.java:218)
at org.apache.avro.specific.SpecificData$2.load(SpecificData.java:215)
at avro.shaded.com.google.common.cache.LocalCache$LoadingValueReference.loadFuture(LocalCache.java:3568)
at avro.shaded.com.google.common.cache.LocalCache$Segment.loadSync(LocalCache.java:2350)
at avro.shaded.com.google.common.cache.LocalCache$Segment.lockedGetOrLoad(LocalCache.java:2313)
at avro.shaded.com.google.common.cache.LocalCache$Segment.get(LocalCache.java:2228)
... 9 more
Is there any way for us to supply the coder at runtime, without knowing the schema beforehand?
This is possible. I recommend the following approach:
Do not use an intermediate collection of type GenericRecord. Keep it as a collection of your POJOs.
Write a transform that extracts the schema of your data and makes it available as a PCollectionView<however you want to represent the schema> (see the sketch after these steps).
When writing to BigQuery, write your PCollection<YourPojo> via write().to(DynamicDestinations), and when writing to Avro, use FileIO.write() or writeDynamic() in combination with AvroIO.sinkViaGenericRecords(). Both of these can take a dynamically computed schema from a side input (that you computed above).
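For the second step, here is a minimal sketch of building the schema side input, assuming a hypothetical POJO type MyPojo with a schemaJson() accessor (the names are illustrative, not part of the answer above). Note that in a streaming pipeline you would window the collection before creating the view:

import org.apache.beam.sdk.transforms.DoFn;
import org.apache.beam.sdk.transforms.ParDo;
import org.apache.beam.sdk.transforms.Sample;
import org.apache.beam.sdk.transforms.View;
import org.apache.beam.sdk.values.PCollection;
import org.apache.beam.sdk.values.PCollectionView;

// Represent the schema as its JSON string (trivially serializable) and turn
// one representative element into a singleton side input.
PCollection<MyPojo> pojos = ...; // the collection from "Convert Json to pojo"
PCollectionView<String> schemaView = pojos
    .apply("Extract schema", ParDo.of(new DoFn<MyPojo, String>() {
        @ProcessElement
        public void processElement(ProcessContext c) {
            c.output(c.element().schemaJson()); // hypothetical accessor
        }
    }))
    .apply("Keep one", Sample.any(1))
    .apply("As singleton", View.asSingleton());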

IBM Integration Bus and xsd:anyType

I'm working with IIB v9 mxsd message definitions. I'd like to define one of the XML elements to be of type xsd:anyType. However, in the list of types I can choose from, only anySimpleType and anyURI are available (besides all the other types like string, integer, etc.).
How can I get around this limitation?
The XMLNSC parser supports the entire XML Schema specification, including xs:any and xs:anyType. In IIB v9 you should create a Library and import your XSDs into it. Link your Application to the Library, and the XMLNSC parser will find and use the model. You do not need to specify the name of the Library in the node properties; the XSD model will be automatically available to the entire application.
You do not need to use a message set at all in IIB v9 and later versions.
The mxsd file format is used only by the MRM (not DFDL) parser.
You shouldn't use an MXSD to model your XML data; use a normal XSD.
MXSD is for modelling data for the DFDL parser, but you should use the XMLNSC parser for XML messages and define them in XSDs, in which you can use anyType.
As far as I know, DFDL doesn't support anyType.

Deedle - what's the schema format for readCsv

I was using Deedle in F# to read a txt file (no header) into a data frame, and cannot find any example of how to specify the schema.
let df = Frame.ReadCsv(datafile, separators="\t", hasHeaders=false, schema=schema)
I tried to give a string with names separated by ',', but it doesn't seem to work.
let schema = @"name, age, address"
I did some searching in the docs, but only found the following, and I don't know where to find the referenced info :(
schema - A string that specifies CSV schema. See the documentation for information about the schema format.
The schema format is the same as in the CSV type provider in F# Data, so it should accept something like "name (string), age (int), address (string)" (each column name followed by its type in parentheses).
The only problem (quite important!) is that the Deedle library had a bug where it completely ignored the schema parameter, so no matter what you provided, it was ignored.
I just submitted a pull request that fixes the bug and also includes some examples (in the form of unit tests). See the pull request here (and click on "Files changed" to see the samples).
If you do not want to wait for a new release, just get the code from my GitHub fork and build it using build.cmd in the root (run it once first to restore packages). The complete build requires a local installation of R (because it builds the R plugin too), but it should build Deedle.dll before failing... (After the first run of build.cmd, you can just use the Deedle.sln solution.)
