How do I batch extract metadata from DM3 files using ImageJ?

How can you extract metadata for a batch of images? My first thought was to record a macro and then modify it to operate on a list of file names.
In that vein, I tried recording a macro doing something like this:
Ctrl-o # Open a file
12.dm3, Enter # Select file to open
Ctrl-i # Open metadata in a new window
Ctrl-s # Save file
Info for 12.txt, Enter # Name of file being saved
Ctrl-w # Close current window
Ctrl-w # Close current window
These steps work when I do them manually, but recording them produces the following macro, which seems to be missing most of what I tried to record:
open("/path/to/file/12.dm3");
run("Show Info...");
run("Close");
run("Close");

I also tried modifying a Jython script (from a Gist) that is supposed to extract dimension metadata from an image:
from java.io import File
from loci.formats import ImageReader
from loci.formats import MetadataTools
import glob
# Create output file
outFile = open('./pixel_sizes.txt','w')
# Get list of DM3 files
filenames = glob.glob('*.dm3')
for filename in filenames:
    # Open file
    file = File('.', filename)
    # parse file header
    imageReader = ImageReader()
    meta = MetadataTools.createOMEXMLMetadata()
    imageReader.setMetadataStore(meta)
    imageReader.setId(file.getAbsolutePath())
    # get pixel size
    pSizeX = meta.getPixelsPhysicalSizeX(0)
    # close the image reader
    imageReader.close()
    outFile.write(filename + "\t" + str(pSizeX) + "\n")
# Close the output file
outFile.close()

You could use getImageInfo() instead of run("Show Info..."). This gives you a string in the macro containing the same text that run("Show Info...") would display, which you can then save or modify as you like. See http://rsb.info.nih.gov/ij/developer/macro/functions.html#getImageInfo for more information.
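A minimal macro sketch along those lines (the folder prompt, the .dm3 filter, and the "Info for ..." output names are my assumptions, echoing the manual steps above, not part of the original answer):

dir = getDirectory("Choose a Directory");
list = getFileList(dir);
for (i = 0; i < lengthOf(list); i++) {
    if (endsWith(list[i], ".dm3")) {
        open(dir + list[i]);
        info = getImageInfo();  // same text as the Show Info window
        name = substring(list[i], 0, lengthOf(list[i]) - 4) + ".txt";
        File.saveString(info, dir + "Info for " + name);
        close();
    }
}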

Related

Generate multiple output files from one input

I'm trying to create a code generator that takes a JSON file as input and generates multiple classes in multiple files.
My question is: is it possible to create multiple files from one input using build from Dart?
Yes, it is possible. There are currently many tools available on pub.dev that do code generation. For creating a simple custom code generator, check out the code_builder package provided by the core Dart team.
You can use dart_style as well to format the output of the code_builder results.
Here is a simple example of the package in use (from the package's example):
import 'package:code_builder/code_builder.dart';
import 'package:dart_style/dart_style.dart';
final _dartfmt = DartFormatter();
// The string of the generated code for AnimalClass
String animalClass() {
  final animal = Class((b) => b
    ..name = 'Animal'
    ..extend = refer('Organism')
    ..methods.add(Method.returnsVoid((b) => b
      ..name = 'eat'
      ..body = refer('print').call([literalString('Yum!')]).code)));
  return _dartfmt.format('${animal.accept(DartEmitter())}');
}
You can then use the dart:io API to create a File and write the output from animalClass() to it:
final animalDart = File('animal.dart');
// write the new file to the disk
animalDart.createSync();
// write the contents of the class to the file
animalDart.writeAsStringSync(animalClass());
You can use the File API to read the .json file from its path, then call jsonDecode (from dart:convert) on the contents to access the JSON config.
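A small sketch of that step (the config.json name and the "classes" key are just examples, not from the question):

import 'dart:convert';
import 'dart:io';

void main() {
  // Read and decode the JSON config
  final raw = File('config.json').readAsStringSync();
  final config = jsonDecode(raw) as Map<String, dynamic>;
  // Each entry could describe one class to generate with code_builder
  for (final entry in config['classes']) {
    print(entry);
  }
}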

How do you load a file (.csv) into a Beeware/Briefcase application?

I am using Kivy for the GUI and Briefcase as a packaging utility. My .kv file is in the appname/project/src/projectName/resources folder. I also need a .csv file in the same folder, which I want to read with pandas. I have no problem importing the packages (I added them to the .toml file). I can't use the full path, because when I package the app the path will be different on each computer. Using paths relative to the app.py file does not work either, giving me a file-not-found error. Is there a way to read a file using a relative path (maybe the source parameter in the .toml file)?
kv = Builder.load_file('resources/builder.kv')
df = pd.read_csv('resources/chemdata.csv')
class ChemApp(App):
    def build(self):
        self.icon = 'resources/elemental.ico'
        return kv
I just encountered and solved a similar problem with Briefcase, even though I was using BeeWare's Toga GUI.
In my case, the main Python file app.py had to access a database file resources/data.csv. In the constructor of the class where I create the main window in app.py, I added the following lines (the import line wasn't in that constructor, but is included here for clarity):
from pathlib import Path
self.resources_folder = Path(__file__).joinpath("../resources").resolve()
self.db_filepath = self.resources_folder.joinpath("data.csv")
Then I used self.db_filepath to successfully open the CSV file on my phone.
__file__ returns the path to the current file on whatever platform or device.
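Applied to the Kivy/pandas code from the question, the same idea might look like this (a sketch; the RESOURCES name is mine, the file names come from the question):

from pathlib import Path

import pandas as pd
from kivy.app import App
from kivy.lang import Builder

# Resolve the resources folder relative to this file, not the working directory
RESOURCES = Path(__file__).joinpath("../resources").resolve()

kv = Builder.load_file(str(RESOURCES / "builder.kv"))
df = pd.read_csv(RESOURCES / "chemdata.csv")

class ChemApp(App):
    def build(self):
        self.icon = str(RESOURCES / "elemental.ico")
        return kv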

Nifi: How to concatenate flowfile to already existing tables in a directory?

This is a question about NiFi.
I made a NiFi pipeline to convert flowfiles from XML format to CSV format.
Now I would like to concatenate (union) each converted CSV flowfile onto the existing table with the same filename (the filename also serves as the table name).
Simply put, my processor flow is the following:
1. GetFile (from a particular directory)
2. Convert XML to CSV
3. Update the flowfile with the table name
4. PutFile (to a different directory)
But at the end of the flow, the PutFile processor throws an error saying "file with the same name already exists".
I have no idea how a flowfile can be appended to an existing CSV table.
Any advice, tips, ideas are appreciated.
Thank you in advance.
There is no support for appending to a file; however, you could use ExecuteGroovyScript to do it:
def ff = session.get()
if (!ff) return
ff.read().withStream { s ->
    String path = "./out_folder/${ff.filename}"
    // sync on file path to avoid conflict on same file writing (hope)
    synchronized(path) {
        new File(path).append(s)
    }
}
REL_SUCCESS << ff
If you need to work with text (reader) content rather than byte (stream) content, the following example shows how to skip one header line from the flowfile when the destination file already exists:
def ff = session.get()
if (!ff) return
ff.read().withReader("UTF-8") { r ->
    String path = "./.data/${ff.filename}"
    // sync on file path to avoid conflict on same file writing (hope)
    synchronized(path) {
        def fout = new File(path)
        if (fout.exists()) r.readLine()  // skip 1 line (header) only if out file already exists
        fout.append(r)  // append to the file the rest of reader content
    }
}
REL_SUCCESS << ff

python xlrd: convert xls to csv using tempfiles. Tempfile is empty

I am downloading an xls file from the internet. It is in .xls format, but I need 'Sheet1' to be in csv format. I use xlrd to make the conversion, but I seem to have run into an issue where the file I write to ends up empty.
import urllib2
import tempfile
import csv
import xlrd
url_2_fetch = ____
u = urllib2.urlopen(url_2_fetch)
wb = xlrd.open_workbook(file_contents=u.read())
sh = wb.sheet_by_name('Sheet1')
csv_temp_file = tempfile.TemporaryFile()
with open('csv_temp_file', 'wb') as f:
    writer = csv.writer(f)
    for rownum in xrange(sh.nrows):
        writer.writerow(sh.row_values(rownum))
That seemed to have worked. But now I want to inspect the values by doing the following:
with open('csv_temp_file', 'rb') as z:
    reader = csv.reader(z)
    for row in reader:
        print row
But I get nothing:
>>> with open('csv_temp_file', 'rb') as z:
...     reader = csv.reader(z)
...     for row in reader:
...         print row
...
>>>
I am using a tempfile because I want to do more parsing of the content and then use SQLAlchemy to store the parsed csv contents in a MySQL database.
I appreciate the help. Thank you.
This is completely wrong:
csv_temp_file = tempfile.TemporaryFile()
with open('csv_temp_file', 'wb') as f:
    writer = csv.writer(f)
The tempfile.TemporaryFile() call returns "a file-like object that can be used as a temporary storage area. The file will be destroyed as soon as it is closed (including an implicit close when the object is garbage collected)."
So your variable csv_temp_file contains a file object, already open, that you can read and write to, and will be deleted as soon as you call .close() on it, overwrite the variable, or cleanly exit the program.
So far so good. But then you proceed to open another file with open('csv_temp_file', 'wb') that is not a temporary file, is created in the script's current directory with the fixed name 'csv_temp_file', is overwritten every time this script is run, can cause security holes, strange bugs and race conditions, and is not related to the variable csv_temp_file in any way.
You should trash the with open statement and use the csv_temp_file variable you already have. Try calling .seek(0) on it before using it again with the csv reader; that should work. Call .close() on it when you are done with it and the temporary file will be deleted.
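Putting that advice together, a sketch of the corrected script (Python 2, as in the question; the URL placeholder is left as-is):

import csv
import tempfile
import urllib2

import xlrd

url_2_fetch = ____  # placeholder from the question
u = urllib2.urlopen(url_2_fetch)
wb = xlrd.open_workbook(file_contents=u.read())
sh = wb.sheet_by_name('Sheet1')

# One temporary file object, used for both writing and reading
csv_temp_file = tempfile.TemporaryFile()
writer = csv.writer(csv_temp_file)
for rownum in xrange(sh.nrows):
    writer.writerow(sh.row_values(rownum))

# Rewind before reading the same object back
csv_temp_file.seek(0)
reader = csv.reader(csv_temp_file)
for row in reader:
    print row

# The temporary file is deleted as soon as it is closed
csv_temp_file.close()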

How to close a ZipFile

I'm passing a dynamic zip file location from a database to a method. I want to unzip the file to a temp location, extract the xml report file inside, apply an xslt stylesheet, copy it as an rhtml to a view directory for rendering, and delete the temporary extracted xml file. The functionality works (the rhtml file is overwritten each time and renders), except that each execution extracts from the same parent zip and the extracted xml cannot be deleted, which leads me to believe that the first execution is not closing the parent zip (releasing its handle). Therefore, subsequent executions extract the xml from the first zip processed. I've tried "Zip::ZipFile.close", "zipFile = Zip::ZipFile.open(fileLocation); zipFile.close", "File.close(fileLocation)", and other permutations.
Any help would be appreciated.
Can you pass a block to Zip::ZipFile.open? This will close it when the block exits:
Zip::ZipFile.open(file_name) do |zip_file|
  zip_file.extract('report.xml', '/tmp')
end
# zip file is closed at this point
# apply_xslt
# copy rhtml to app/views/...
# etc
== EDIT ==
Based on your comments, here's a working example:
require 'rubygems'
require 'zip/zip'
require 'fileutils'
zip_file_name = 'test.zip'
out_dir = 'tmp_for_zip'
FileUtils.mkdir_p out_dir
Zip::ZipFile.open(zip_file_name) do |zip_file|
  report_name = File.basename(zip_file.name).gsub('zip', 'xml')
  out = File.join(out_dir, report_name)
  zip_file.extract(report_name, out) unless File.exists?(out)
  puts "extracted #{report_name} to #{out}"
end
Also, I don't know if you are running on a Unix system, but if you are, you can use lsof (list open files) to find out whether the file is actually open:
lsof | grep your_file_name
