I'm looking for a simple method (preferably C/C++/Python or even a shell command) to save RAW CFA data from array to a DNG file.
Something that would be like the following example
in android https://developer.android.com/reference/android/hardware/camera2/DngCreator but for a Linux machine
Hello, can someone tell me what format my input data has to be in? Right now I have it in CSV format, with the first column being the target variable, but I always get an Algorithm Error, which I think is due to a wrong input data format.
trainpath = sess.upload_data(
    path='revenue_train.csv', bucket=bucket,
    key_prefix='production')
testpath = sess.upload_data(
    path='revenue_test.csv', bucket=bucket,
    key_prefix='production')
# launch training job, with asynchronous call
sklearn_estimator.fit({'train':trainpath, 'test': testpath}, wait=False)
When you use a custom Docker image or a framework estimator (as you do), you can use any file format (CSV, PDF, MP4, whatever you have in S3). The SKLearn container and estimator are agnostic to the file format; it is the role of your user-provided Python code in the estimator to know how to read those files.
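A minimal sketch of the kind of loading code your entry-point script must provide, assuming a header-less CSV with the target variable in the first column (the filename, layout, and helper name here are illustrative, not part of any SageMaker API):

```python
import csv
import os
import tempfile

def load_csv(path):
    """Read a header-less CSV; return (features, targets) as lists of floats."""
    X, y = [], []
    with open(path, newline="") as f:
        for row in csv.reader(f):
            y.append(float(row[0]))                # first column: target
            X.append([float(v) for v in row[1:]])  # remaining columns: features
    return X, y

# In a real SageMaker script the directory would come from the training channel,
# e.g. os.environ["SM_CHANNEL_TRAIN"]; here we fabricate a file for illustration.
path = os.path.join(tempfile.mkdtemp(), "revenue_train.csv")
with open(path, "w") as f:
    f.write("100.0,1.5,2.5\n200.0,3.0,4.0\n")

X, y = load_csv(path)
```

The key point is that parsing (header or no header, delimiter, target column position) lives entirely in your script; the container just hands you the channel directory.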
I am working with QccPack for hyperspectral image compression, which uses the .icb extension.
How can I convert from ENVI .hdr to .icb in order to work with QccPack?
I just had a quick look into the QccPack documentation (the first result I found via Google; I assume this is what you are talking about):
http://qccpack.sourceforge.net/Documentation/QccIMGImageCubeFree.3.html
.icb is a file format that stores "image cubes", which the documentation describes as a data structure for saving volumetric image data.
An ENVI .hdr file, by contrast, stores only metadata for an image whose pixel data lives in a separate file.
You cannot convert image metadata into image data.
Is there any way (command-line tool) to calculate an MD5 hash for .NEF (also .CR2, .TIFF) files regardless of any metadata, e.g. EXIF, IPTC, XMP and so on?
The MD5 hash should stay the same when any metadata inside the image file is updated.
I searched for a while, the closest solution is:
exiftool test.nef -all= -o - -m | md5
but 'exiftool -all=' still keeps a set of EXIF tags in the output file, and the MD5 hash changes if I update the remaining tags.
ImageMagick has a method for doing exactly this. It is installed on most Linux distros and is available for OSX (ideally via homebrew) and also Windows. There is an escape for the image signature, which includes only pixel data and not metadata - you use it like this:
identify -format %# _DSC2007.NEF
feb37d5e9cd16879ee361e7987be7cf018a70dd466d938772dd29bdbb9d16610
I know it does what you want: the calculated checksum does not change when you modify the metadata of PNG files, for example, and it calculates the checksum correctly for CR2 and NEF files. However, I am not in the habit of modifying RAW files such as yours, so I have not tested that it does the right thing in that case - though I would be startled if it didn't! Please test before use.
The reason there is still some Exif data left is that the image data for a NEF file (and similar TIFF-based file types) is located within that Exif block: remove it and you have removed the image data. See ExifTool FAQ 7, which has an example shortcut tag that may help you out.
I assume your intention is to verify the actual image data has not been tampered with.
An alternative to stripping the metadata is to convert the image to a format that has no metadata.
ImageMagick is a well-known open-source toolkit (Apache 2 license) for image manipulation and conversion. It provides libraries with bindings for various languages, as well as command-line tools for various operating systems.
You could try:
convert test.nef bmp:- | md5
This converts test.nef to bmp on stdout and pipes it to md5.
As far as I recall, BMP has no support for metadata, and I'm not sure ImageMagick even preserves metadata across conversions anyway.
This will only work with single-image files (i.e. not multi-image TIFFs or GIF animations). There is also a slight possibility that two different images produce the same converted output because of color-space conversions, but such differences would not be visible.
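The principle behind both answers - hash only the pixel data, not the container - can be illustrated in pure Python with a toy binary PPM file, whose comment lines are its only "metadata". This is a deliberately simplified parser for illustration, not a substitute for `identify -format %#`:

```python
import hashlib

def ppm_pixel_hash(data: bytes) -> str:
    """MD5 of a binary PPM's pixel payload, ignoring header comments."""
    lines = data.split(b"\n")
    fields, i = [], 0
    while len(fields) < 4:                        # need: magic, width, height, maxval
        fields += lines[i].split(b"#")[0].split() # '#' starts a comment ("metadata")
        i += 1
    return hashlib.md5(b"\n".join(lines[i:])).hexdigest()

# Two PPMs with an identical 1x1 red pixel but different comment metadata:
img_a = b"P6\n1 1\n255\n" + bytes([255, 0, 0])
img_b = b"P6\n# edited by some tool\n1 1\n255\n" + bytes([255, 0, 0])
```

Hashing the whole files gives different digests, while hashing only the pixel payload gives the same one - exactly the property you want from `identify -format %#` or the convert-to-BMP pipe.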
I would like to load some data from a .txt file into the testbench as input in order to run the simulation, but the data I wish to load are real numbers.
For example:
0.00537667139546100
0.0182905843325460
-0.0218392122072903
0.00794853052089004
I found that $readmemh and $readmemb are meant for hex or binary. Is there any method that can help me load the data without converting it to binary or hex before loading it into the testbench?
$readmemh and $readmemb are meant to load data into a memory. As you mentioned, these tasks require hex or binary data. If you simply want to use data read from a file, you can use the $fscanf function with the %f format, i.e.:
$fscanf(file,"%f ",real_num);
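If you ever do want the $readmemh route instead, a small helper script (hypothetical; sketched here in Python) can convert the decimal text file into 64-bit IEEE-754 hex words, which a `reg [63:0]` memory can load and `$bitstoreal` can turn back into reals:

```python
import struct

def real_to_hex(value: float) -> str:
    # ">d" packs a big-endian IEEE-754 double, the bit layout $bitstoreal expects
    return struct.pack(">d", value).hex()

samples = [0.00537667139546100, 0.0182905843325460, -0.0218392122072903]
with open("data_hex.txt", "w") as f:   # feed this file to $readmemh
    for v in samples:
        f.write(real_to_hex(v) + "\n")
```

The conversion is exact (each double maps to one 16-digit hex word and back), so no precision is lost relative to reading the decimal file with $fscanf.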
I have the following code:
byte[] b = new byte[len]; //len is preset to 157004 in this example
//fill b with data by reading from a socket
String pkt = new String(b);
System.out.println(b.length + " " + pkt.length());
This prints two different values on Ubuntu (157004 and 147549) but the same value twice on OS X. The string is actually an image being transmitted by the ImageIO library. Thus, on OS X I am able to decode the string back into an image just fine, but on Ubuntu I am not.
I am using version 1.6.0_45 on OS X and tried the same version on Ubuntu, in addition to Oracle JDK 7 and the default OpenJDK.
I noticed that I can get the string length to equal the byte array length by decoding with Latin-1:
String pkt = new String(b,"ISO-8859-1");
However, this does not make it possible to decode the image, and it is hard to see what is going on because the string looks like garbage to me.
I'm perplexed that the same JDK version behaves differently on a different OS.
This string is actually an image being transmitted by the ImageIO library.
And that's where you're going wrong.
An image is not text data - it's binary data. If you really need to encode it in a string, you should use base64. Personally I like the public domain base64 encoder/decoder at iharder.net.
This isn't just true for images - it's true for all binary data which isn't known to be text in a particular encoding... whether that's sound, movies, Word documents, encrypted data etc. Never just treat it as if it were just encoded text - it's a recipe for disaster.
Ubuntu uses UTF-8 as the default charset, which is a variable-length encoding, so the lengths of the string and the byte data differ. That is the source of the difference; for the solution I defer to Jon's answer.
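The failure mode, and why base64 fixes it, can be sketched in Python (the mechanics are the same as Java's `new String(b)` versus `java.util.Base64`):

```python
import base64

data = bytes(range(256))   # stand-in for raw image bytes read off the socket

# Decoding arbitrary binary as UTF-8 (what new String(b) does on Ubuntu) is
# lossy: invalid sequences are replaced and can never be recovered.
lossy = data.decode("utf-8", errors="replace").encode("utf-8")

# Latin-1 maps every byte to one character, which is why the lengths matched
# with ISO-8859-1 - but the result is still binary dressed up as text.
same_length = len(data.decode("iso-8859-1")) == len(data)

# Base64 treats the payload as opaque binary and round-trips it exactly:
text = base64.b64encode(data).decode("ascii")   # safe to carry in a String
restored = base64.b64decode(text)
```

Only the base64 round trip recovers the original bytes; the UTF-8 "decoding" silently corrupts them, which is exactly why the image fails to decode on Ubuntu.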