How to understand the concept of Object Container Files in Avro?

I'm quite confused about the concept of Object Container Files in Avro.
https://avro.apache.org/docs/current/spec.html#Object+Container+Files
Does "Object Container Files" refer to the files produced by Avro when serializing data? Avro persists the serialized data into one or more files; are these files called Object Container Files?

If you store Avro data on disk, those files follow the object container file specification described there.
The files contain binary data, written after the records are serialized.
Each file embeds the schema in its header, followed by many serialized records matching that schema.
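For illustration, here is a minimal sketch of writing and reading a container file with the fastavro Python library (the schema, field names, and file name are made up for the example):
from fastavro import writer, reader, parse_schema

# Hypothetical record schema for the example
schema = parse_schema({
    "type": "record",
    "name": "User",
    "fields": [
        {"name": "name", "type": "string"},
        {"name": "age", "type": "int"},
    ],
})

# Write a container file: the schema goes into the file header,
# the records follow as binary-encoded blocks.
with open("users.avro", "wb") as out:
    writer(out, schema, [{"name": "Alyssa", "age": 25}, {"name": "Ben", "age": 30}])

# Read it back: the schema is recovered from the file itself.
with open("users.avro", "rb") as fo:
    for record in reader(fo):
        print(record)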

Related

GeoDjango Admin how to populate a GeometryField from a file upload?

I have a project that uses GeoDjango to store GPS routes. The geometry is stored in a GeometryField. This works great when data is imported with geospatial information, but it is frustrating when I have a model which needs user-supplied data. I would like to have a widget in the Admin that will let me upload a file, and then use that file to essentially import the geospatial information.
A FileField doesn't seem appropriate, since I don't want the file stored on the file system. I want it processed and stored in the geospatial DB field so I can run geospatial functions on the data.
Ideally the admin interface would contain a file upload widget and the geospatial field, shown with the typical map.
There are a couple of options for importing geo data files into the DB.
If you want to use a zipped shapefile, GeoDjango comes with a nice solution, LayerMapping.
Before importing the file, you should implement the workflow for uploading the zip file with a form, checking for the required extensions ([".shp", ".shx", ".dbf", ".prj"]) and saving the files for reading (see the sketch after the snippet below).
Then you have to define a mapping to match field names between the file and the Django model.
After you complete these steps, you can save the geometries into the DB with:
from django.contrib.gis.utils import LayerMapping

# Path to the extracted .shp file from the uploaded archive
layer = uploaded_and_extracted_file
# Maps Django model fields to shapefile attributes; the geometry
# field maps to the OGR geometry type.
mapping = {"id": "district", "name": "dis_name", "area": "shape_area", "geom": "MULTIPOLYGON"}
lm = LayerMapping(ModelName, layer, mapping, transform=True, encoding="utf-8")
lm.save(verbose=True, strict=True, silent=True)
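For the upload step itself, here is a minimal sketch assuming a plain Django form; the form class and field names are illustrative:
import os
import zipfile
from django import forms

REQUIRED_EXTENSIONS = [".shp", ".shx", ".dbf", ".prj"]

class ShapefileUploadForm(forms.Form):
    zip_file = forms.FileField()

    def clean_zip_file(self):
        uploaded = self.cleaned_data["zip_file"]
        # Verify that every required shapefile part is in the archive
        with zipfile.ZipFile(uploaded) as zf:
            extensions = {os.path.splitext(name)[1].lower() for name in zf.namelist()}
        missing = [ext for ext in REQUIRED_EXTENSIONS if ext not in extensions]
        if missing:
            raise forms.ValidationError("Missing shapefile parts: " + ", ".join(missing))
        uploaded.seek(0)
        return uploaded
After validation, you would extract the archive to a temporary directory and pass the path of the .shp file as the layer above.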

Issue with loading Parquet data into Snowflake Cloud Database when written with v1.11.0

I am new to Snowflake, but my company has been using it successfully.
Parquet files are currently being written with an existing Avro Schema, using Java parquet-avro v1.10.1.
I have been updating the dependencies in order to use latest Avro, and part of that bumped Parquet to 1.11.0.
The Avro schema is unchanged. However, when using the Snowflake COPY INTO command, I receive a LOAD FAILED with the error Error parsing the parquet file: Logical type Null can not be applied to group node, but no other error details :(
The problem is that there are no null columns in the files.
I've cut the Avro schema down, and found that the presence of a MAP type in the Avro schema is causing the issue.
The field is:
{
  "name": "FeatureAmounts",
  "type": {
    "type": "map",
    "values": "records.MoneyDecimal"
  }
}
An example of the Parquet schema, as printed by parquet-tools:
message record.ResponseRecord {
  required binary GroupId (STRING);
  required int64 EntryTime (TIMESTAMP(MILLIS,true));
  required int64 HandlingDuration;
  required binary Id (STRING);
  optional binary ResponseId (STRING);
  required binary RequestId (STRING);
  optional fixed_len_byte_array(12) CostInUSD (DECIMAL(28,15));
  required group FeatureAmounts (MAP) {
    repeated group map (MAP_KEY_VALUE) {
      required binary key (STRING);
      required fixed_len_byte_array(12) value (DECIMAL(28,15));
    }
  }
}
The two files I have, written with Parquet 1.10.1 and 1.11.0, output this identical schema.
I have also tried with a bigger schema example, and everything works fine as long as no "map" Avro type is present in the schema. I have other massive files with huge schemas and many union types that convert to groups in Parquet, and all are written and read successfully when they don't contain any "map" types.
But as soon as I add back the "map" type, I get that weird error message from Snowflake when trying to ingest the 1.11.0 version (the 1.10.1 version loads successfully). Yet parquet-tools built with 1.11.0, 1.10.1, etc. can still read the files.
I understand from this comment that there are changes to the logical types in Parquet 1.11.0, but that the format is supposed to remain compatible so that older readers can still consume the files.
But does anyone know what version of Parquet is used by Snowflake to parse these files? Is there something else that could be going on here?
I'd appreciate any assistance.
Logical type Null can not be applied to group node
Looking up the error above, it appears that a version of Apache Arrow's parquet libraries is being used to read the file.
However, looking closer, the real problem lies in the use of legacy types within the Avro based Parquet Writer implementation (the following assumes Java was used to write the files).
The new logicalTypes schema metadata introduced in Parquet defines many types, including a singular MAP type. Historically, the former convertedTypes schema field supported use of MAP and MAP_KEY_VALUE for legacy readers. The new writers that use logicalTypes (1.11.0+) should not be using the legacy map type anymore, but the work hasn't been done yet to update the Avro-to-Parquet schema conversions to drop the MAP_KEY_VALUE types entirely.
As a result, the schema field for MAP_KEY_VALUE gets written out with an UNKNOWN value of logicalType, which trips up Arrow's implementation that only understands logicalType values of MAP and LIST (understandably).
Consider logging this as a bug against the Apache Parquet project to update their Avro writers to stop nesting the legacy MAP_KEY_VALUE type when transforming an Avro schema to a Parquet one. It should've ideally been done as part of PARQUET-1410.
Unfortunately this is hard-coded behaviour and there are no configuration options that influence map-types that can aid in producing a correct file for Apache Arrow (and for Snowflake by extension). You'll need to use an older version of the writer until a proper fix is released by the Apache Parquet developers.
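As a local sanity check of the theory above, you can try reading the file's schema with Python's Arrow bindings; if pyarrow rejects it, an Arrow-based reader on Snowflake's side likely will too (the file name here is illustrative):
import pyarrow.parquet as pq

# If the MAP_KEY_VALUE annotation carries an unrecognized logicalType,
# an Arrow-based reader may fail here with a similar error.
print(pq.ParquetFile("response_records.parquet").schema_arrow)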

What does the JSON file generated by AzCopy mean?

When I use AzCopy v7.3 to copy Table Storage, I receive two files: a JSON file and a manifest. The name of the JSON file is generated with the format myfilename_XXXXXXX. When I rename the JSON file, AzCopy throws an exception. I really want to know how the XXXXXXX part is generated and how the JSON file maps to the manifest file.
Thanks for your help
The suffix is the CRC64 calculated from the entity content in that JSON file, and the manifest file stores the total CRC64 aggregated over all the JSON files. This ensures that the file list is complete and that each individual JSON file isn't corrupted.
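To illustrate the idea only (this is not AzCopy's documented algorithm; the exact CRC64 variant and aggregation scheme are assumptions here), a checksum over a file's content can be computed with the crcmod package:
import crcmod.predefined

# Hypothetical check: CRC64 over the JSON file's bytes.
# AzCopy's actual polynomial and aggregation may differ.
crc64 = crcmod.predefined.mkCrcFun("crc-64")
with open("myfilename_0123456.json", "rb") as f:
    print(format(crc64(f.read()), "016x"))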

Adtf dat files - streams and structure types

An ADTF .dat file contains streams of data, but within the .dat file there is only a stream name. To find the structure of a stream, one has to go through the DDL .description file.
Sometimes the .description files are incomplete or are missing the link from the stream name to the corresponding structure.
Is there some additional information about structure name hidden in the .dat file itself? (Or my understanding is completely wrong?)
You must distinguish between ADTF 2.x and ADTF 3.x and their (adtf)dat file structures.
ADTF 2.x:
You are right: you can only interpret the data with DDL. The stream must point to a structure described in the Media Description.
Sometimes the .description files are incomplete or are missing link
from stream name to corresponding structure.
You can avoid this by enabling the option Create Media Description in the Harddisk Recorder. Then a *.dat.description will be stored next to the same-titled *.dat file, containing the correct stream and structure references, because they were available during recording.
Is there some additional information about structure name hidden in the .dat file itself?
No, it is only the stream name. So you need to know the data structure behind it in order to interpret the stream. If you have the header (C struct), you can also convert it to DDL and refer to that.
ADTF 3.x:
To avoid these problems with unavailable or incorrect description files, in ADTF 3.x the DDL is stored directly in the *.adtfdat file.

See the contents of Checkpoint files?

Per the documentation, variables in a session can be saved and restored to/from a binary file with a tf.train.Saver object.
But is there any way to see the content of the binary file?
A checkpoint file is an sstable. The value for each record is a serialized SavedTensorSlices message.
To see the content of a serialized SavedTensorSlices message, we just deserialize it into a SavedTensorSlices object, something like this:
SavedTensorSlices message;
message.ParseFromString(value);  // value: a record's payload from the sstable
std::cout << message.DebugString();
The files are read/written using TensorSliceReader and TensorSliceWriter in C++ using what seems to be a special format for tensor data.
The files contain the values of the saved tensors. The simplest way to inspect those values would be to restore the tensors from the checkpoint file and inspect the tensors directly.
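For example, in TF 1.x a checkpoint can be inspected from Python without rebuilding the graph (the checkpoint path is illustrative):
import tensorflow as tf

# List every saved tensor and print its value (TF 1.x API;
# tf.train.load_checkpoint offers the same interface in TF 2.x).
reader = tf.train.NewCheckpointReader("model.ckpt")
for name, shape in reader.get_variable_to_shape_map().items():
    print(name, shape)
    print(reader.get_tensor(name))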
