I am not a coder, but I'm trying to turn ThunderSTORM's batch process into an automated one with a single input folder and a single output folder.
input_directory = newArray("C:\\Users\\me\\Desktop\\Images");
output_directory = ("C:\\Users\\me\\Desktop\\Results");
for (i = 0; i < input_directory.length; i++) {
    open(input_directory[i]);
    originalName = getTitle();
    originalNameWithoutExt = replace(originalName, ".tif", "");
    fileName = originalNameWithoutExt;
    run("Run analysis", "filter=[Wavelet filter (B-Spline)] scale=2.0 order=3 detector "+
        "detector=[Local maximum] connectivity=8-neighbourhood threshold=std(Wave.F1) "+
        "estimator=[PSF: Integrated Gaussian] sigma=1.6 method=[Weighted Least squares] fitradius=3 mfaenabled=false "+
        "renderer=[Averaged shifted histograms] magnification=5.0 colorizez=true shifts=2 "+
        "repaint=50 threed=false");
    saveAs(fileName+"_Results", output_directory);
}
This probably looks like a huge mess, but the original batch file used arrays and I can't figure out how those work; taking them out breaks it, so I left them in. The main issue I have is that the saveAs part isn't working.
Using run("Export Results") works, but I have to manually pick a location and file name each time. I tried to set this up to take the image's file name and use it for the CSV it saves.
Can anyone point out where I'm going wrong? I would also love to open only one file at a time (this opens them all) and close it when the analysis is complete, but I will settle for that happening on a different day if I can just manage to save the damn CSV automatically.
I broke the code a whole bunch of times along the way, but as posted it's in working condition.
I appreciate any and all help. Thank you!
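A minimal sketch of the kind of loop you're after -- one image open at a time, CSV saved automatically -- assuming ThunderSTORM's "Export results" command accepts filepath and fileformat parameters (record the export step once with Plugins > Macros > Record... to confirm the exact names; the analysis settings below are copied from your code):
input_directory = "C:\\Users\\me\\Desktop\\Images\\";
output_directory = "C:\\Users\\me\\Desktop\\Results\\";
list = getFileList(input_directory);
for (i = 0; i < list.length; i++) {
    open(input_directory + list[i]);
    fileName = replace(getTitle(), ".tif", "");
    run("Run analysis", "filter=[Wavelet filter (B-Spline)] scale=2.0 order=3 "+
        "detector=[Local maximum] connectivity=8-neighbourhood threshold=std(Wave.F1) "+
        "estimator=[PSF: Integrated Gaussian] sigma=1.6 method=[Weighted Least squares] fitradius=3 mfaenabled=false "+
        "renderer=[Averaged shifted histograms] magnification=5.0 colorizez=true shifts=2 "+
        "repaint=50 threed=false");
    // assumed parameter names -- verify with the macro recorder
    run("Export results", "filepath=[" + output_directory + fileName + ".csv] fileformat=[CSV (comma separated)]");
    close("*"); // close the image (and any other image windows) before the next round
}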
I have a LARGE number of bookmarks and wanted to export them and share them with a group I work with. The issue is that when I export them, the browser (Firefox) adds ADD_DATE and LAST_MODIFIED fields. I was hoping to just use cut or awk to pull out the fields I want, but the lack of a space before the > that precedes the website name makes that difficult, and my regex skills are weak.
How do I add a single space before the second-to-last > at the end of each line so that I can use cut or awk to pull the fields I want into a new file?
Ex: 123456">SecurityTrails would become 123456 >SecurityTrails
Please see below for examples of what I'm working with. Any help is greatly appreciated!
<DT><A HREF="https://securitytrails.com/" ADD_DATE="123456" LAST_MODIFIED="123456">SecurityTrails</A>
I use Firefox myself. It frequently also embeds favicons into the exported bookmarks.html file via base64 encoding. So to account for the different scenarios (beyond just the one mentioned by the OP), maybe something like:
{mawk/mawk2/gawk} 'BEGIN { FS = "\042" } $1 = $1'
then do whatever cutting you want. That's just assuming the OP wanted to keep every bit of it and simply remove the quotation marks.
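For example, on a line like the OP's sample (URL made up here), that one-liner turns every quote into a field boundary rejoined with spaces:
echo '<DT><A HREF="https://securitytrails.com/" ADD_DATE="123456" LAST_MODIFIED="123456">SecurityTrails</A>' |
mawk 'BEGIN { FS = "\042" } $1 = $1'
which prints
<DT><A HREF= https://securitytrails.com/  ADD_DATE= 123456  LAST_MODIFIED= 123456 >SecurityTrails</A>
and that contains exactly the "123456 >SecurityTrails" shape the OP asked for, ready for cut or awk.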
Now, if the objective is just to pull out the URL plus the site's name:
{mawk/mawk2/gawk} 'BEGIN { DBLQT="\042"; FS = "(<A HREF=" DBLQT "|>)" } /<A HREF=/ {
url = substr($2, 1, index($2, DBLQT) - 1);
sitename = $(NF-1);
sub(/<\/A$/, "", sitename) ;
print url " > " sitename ; }' # or whatever way you want the output to be
I just typed it with extra verbosity to show what \042 means -- the ASCII octal code for a double quote.
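For instance, on a bare anchor line (same made-up URL as above), the second program yields:
echo '<A HREF="https://securitytrails.com/" ADD_DATE="123456">SecurityTrails</A>' |
gawk 'BEGIN { DBLQT="\042"; FS = "(<A HREF=" DBLQT "|>)" } /<A HREF=/ {
url = substr($2, 1, index($2, DBLQT) - 1);
sitename = $(NF-1);
sub(/<\/A$/, "", sitename);
print url " > " sitename }'
which prints: https://securitytrails.com/ > SecurityTrails
One caveat: if the line still carries Firefox's <DT> prefix, the leading <DT becomes $1 and the URL shifts from $2 to $3, so strip the prefix first, e.g. with sed 's/^[[:space:]]*<DT>//'. The sitename extraction via $(NF-1) works either way.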
I want to merge two sets of fluorescence microscope images into green & blue composite images, but I'm having trouble with the macro (I haven't used ImageJ before). I have a folder of FITC images to be coloured green and a folder of DAPI images to be coloured blue. I have been using this modified version of a macro I found online:
macro "batch_merge_channel"{
count = 1;
setBatchMode(true);
file1= getDirectory("Choose a Directory");
list1= getFileList(file1);
n1=lengthOf(list1);
file2= getDirectory("Choose a Directory");
list2= getFileList(file2);
n2=lengthOf(list2);
open(file1+list1[1]);
open(file2+list2[1]);
small = n1;
if(small<n2)
small = n2;
for(i=0;i<small;i++)
{
run("Merge Channels...", "c2="+list1[1]+ " c3="+list2[1]+ " keep");
name = substring(list1, 0, 13)+")_merge";
saveAs("tiff", "C:\\Merge\\"+name);
first += 2;
close();
setBatchMode(false);
}
This, however, returns an error:
x.tif is not a valid choice for "C2 (green):"
with x being the name of the first file in the first folder.
If I merge the images manually, two by two, there is no error. So I'm presuming the problem is in the macro code.
I found several cases of this error online, but none of the solutions that seemed to work for those people work for me.
Any help would be appreciated!
In case you didn't solve this already, a great place to get help on ImageJ questions is the forum.
I can suggest a couple of ideas:
Is your image successfully opened by the macro? You could set the batch mode to false to check this.
It looks to me like the for loop does not use the loop variable i: it works on the first pair of images (list1[1], list2[1]), then closes the merged image, but then tries to process image 1 again. To actually loop through all the images in the folder, you have to put something like this inside the loop (you don't need 'keep' -- better to leave it out so the source images will be closed automatically):
open(file1+list1[i]);
open(file2+list2[i]);
run("Merge Channels...", "c2="+list1[i]+ " c3="+list2[i]);
-- Turning off batch mode should be done after the loop, not within the loop.
Here's a version that works for me.
// #File(label = "Green images", style = "directory") file1
// #File(label = "Blue images", style = "directory") file2
// #File(label = "Output directory", style = "directory") output
// Do not delete or move the top 3 lines! They contain essential parameters
setBatchMode(true);
list1 = getFileList(file1);
n1 = lengthOf(list1);
print("n1 = ", n1);
list2 = getFileList(file2);
n2 = lengthOf(list2);
small = n1;
if (n2 < small)
    small = n2; // process only as many pairs as the smaller folder provides
for (i = 0; i < small; i++) {
    image1 = list1[i];
    image2 = list2[i];
    open(file1 + File.separator + list1[i]);
    open(file2 + File.separator + list2[i]);
    print("processing image", i);
    run("Merge Channels...", "c2=&image1 c3=&image2");
    name = substring(image1, 0, 13) + "_merge";
    saveAs("tiff", output + File.separator + name);
    close();
}
setBatchMode(false);
Hope this helps.
I have two volumes (.nrrd) of different qualities. The user can browse through the layers, and if a key is pressed I want to load the corresponding slice from the better-quality volume.
My volume is similar to this one: lesson 10 xtk
I've found:
volume.children[2].children[0].children[0].texture.file = "http://path/to/file.ext";
but if I apply some kind of file (.jpg, .dcm), nothing happens.
Is this the right approach to change the slice -- to go inside the children and change the texture?
Or shall I load the selected slice separately as an object and apply it to the "lower-quality volume" somehow?
edit:
This is what I tried so far (I get errors with DCMs but not with JPGs):
if (event.keyCode == 83) { // "s"-button
volume.children[2].children[0].children[0].texture.file = "http://localhost:3000/112.jpg";
volume.children[2].children[0].children[0].modified();
r.render();
}
edit2: this is what's in my r.onShowtime = function() {}:
volume.children[2].children[0].texture.file = 'http://localhost:3000/112.jpg';
volume.children[2].children[0].visible = true; // to activate the first layer
volume.children[2].children[0].modified();
console.log(volume.children[2].children[0].visible +" "+ volume.children[2].children[0].texture.file);
It outputs "true hostname/112.jpg".
When I inspect the .jpg in Firebug, the header is OK but the response is "null".
When I inspect console.log(volume.children[2].children[0]); with Firebug, .texture.file is set to hostname/112.jpg.
When I go to "Network", the .jpg has been transferred successfully.
Please note that 112.jpg and level.jpg are the same; the first one is loaded in r.onShowtime and the other one is loaded on a keypress event.
EDIT 3: volume.children[2].children[0] is of the type "X.slice", isn't it?
Here is my approach: jsFiddle
And this is my actual issue, still not working: jsFiddle
Mhh..
I think a call to object.modified() is missing in the file setter (and in other setters from injected classes). Let's see when Haehn comes by whether he wants to change something internally, but for the moment, could you try to call it yourself?
You can try to add, after the modification of the texture:
volume.children[2].children[0].children[0].modified();
And if that doesn't work, in addition:
renderer.render();
Edit:
It's strange; I did a similar code and it did something. Can you please try something like the following with your JavaScript console open (Firefox, Chrome, etc. have one) and tell me the error you get?
renderer.onShowtime = function() {
    for (var i = 0; i < volume.children[2].children.length; i++) {
        volume.children[2].children[i].texture.file = "myimage.jpeg";
        volume.children[2].children[i].modified();
    }
};
It is important that you call it in onShowtime, because before that the volume is not loaded, and so slicesX, slicesY, etc. don't exist.
Edit2:
Hey,
Thanks to the information you added, I think I've got it! In the render() method of our renderer3D there is a test on the texture._dirty flag, which you cannot change from outside the framework. In addition, the first rendering with a texture sets that flag to false, and loading a new texture doesn't seem to set it back to true in current XTK. So I think we have to add that in the loader.load(texture, object) method. I'll open an issue on GitHub and see what Haehn thinks of it!
First, apologies, as I realize this is only tangentially related to parser programming.
I've spent hours looking for a text file containing something like the following, but with hundreds (hopefully thousands) of sub-entries. A complete biological classification file would be perfect. A massive version of the following would be great, as my parser parses simple tabbed files.
TL;DR -- I need a massive single-file hierarchical data set, something like the following:
Kingdoms
    Monera
    Protista
    Fungi
    Plants
    Animals
        Porifera
            Sponges
        Coelenterates
            Hydra
            Coral
            Jellyfish
        Platyhelminthes
            Flatworms
            Flukes
        Nematodes
            Roundworms
            Tapeworms
        Chordates
            Urochordates
            Cephalochordates
            Vertebrates
                Fish
                Amphibians
                Reptiles
                Birds
                Mammals
The best I've been able to find are tree-of-life images (from which I transcribed the sample data set above). A single file with a TON of real data would be awesome. It doesn't have to be a biological classification data set, but I would really like the data to reflect something in the real world. (My parser feeds a menu; it would be great if the remainder of my testing used a data set that actually meant something!) Even if the file is not tabbed, if the data could fairly easily be regexed into a tabbed format, that would be great.
Any ideas? Thanks!
It is possible that the XML layout was changed since the earlier answer, but the code submitted there is no longer accurate: the resulting dump contains extraneous entries, because some of the nodes have aliases (denoted as 'othername') that get reported as distinct nodes themselves.
I used the script below to generate the correct dump.
<?php
$reader = new XMLReader();
// node_id=1 fetches the full tree (15963 would be just the primates index)
$reader->open('http://tolweb.org/onlinecontributors/app?service=external&page=xml/TreeStructureService&node_id=1');
$set = -1;
while ($reader->read()) {
    switch ($reader->nodeType) {
        case (XMLReader::ELEMENT):
            if ($reader->name == "OTHERNAMES") {
                $set = 1; // inside an alias block: suppress NAME output
            }
            if ($reader->name == "NODES") {
                $set = -1;
            }
            if ($reader->name == "NODE") {
                $set = -1; // back at a real node: allow NAME output
            }
            if ($reader->name == "NAME" AND $set == -1) {
                echo str_repeat("\t", $reader->depth - 2); // repeat tabs for depth
                $node = $reader->expand();
                echo $node->textContent . "\n";
            }
            break;
    }
}
?>
This turned out to be such a pain in the ass. I finally tracked down a data feed from "The Tree of Life Web Project" at tolweb.org and made the PHP script below to provide the basic functionality my post was looking for.
Change the node_id to have it print a tabbed representation of any of tolweb.org's data -- just take the ID from the page you're browsing on their site and plug it into the node_id below.
Be aware, though, that their data feeds serve up large files, so definitely download the file to your own server (and change the "open" method below to point to the local file) if you're going to hit it more than once or twice.
More info on tolweb.org data feeds can be found here:
http://tolweb.org/tree/home.pages/downloadtree.html
<?php
$reader = new XMLReader();
$reader->open('http://tolweb.org/onlinecontributors/app?service=external&page=xml/TreeStructureService&node_id=15963'); // 15963 is the primates index
while ($reader->read()) {
    switch ($reader->nodeType) {
        case (XMLReader::ELEMENT):
            if ($reader->name == "NAME") {
                echo str_repeat("\t", $reader->depth - 2); // repeat tabs for depth
                $node = $reader->expand();
                echo $node->textContent . "\n";
            }
            break;
    }
}
?>
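If you've downloaded the feed to your own server as suggested above, the only line that changes is the open() call, e.g. (local filename hypothetical):
$reader->open('tolweb_node_15963.xml'); // locally saved copy of the node_id=15963 feed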
I have a database file that I believe was created with Clipper, but I can't say for sure (I have .ntx files for the indexes, which I understand is what Clipper uses). I am trying to create a C# application that will read this database using the System.Data.OleDb namespace.
For the most part I can successfully read the contents of the tables, but there is one field that I cannot. This field, called CTRLNUMS, is defined as CHAR(750). I have read various articles found through Google searches suggesting that fields larger than 255 chars have to be read through a different process than the normal assignment to a string variable, but so far none of the approaches I've found has been successful.
The following is a sample code snippet I am using to read the table; it includes two options I used to read the CTRLNUMS field. Both options resulted in 238 characters being returned, even though there are 750 characters stored in the field.
Here is my connection string:
Provider=Microsoft.Jet.OLEDB.4.0;Data Source=c:\datadir;Extended Properties=DBASE IV;
Can anyone tell me the secret to reading larger fields from a DBF file?
using (OleDbConnection conn = new OleDbConnection(connectionString))
{
    conn.Open();
    using (OleDbCommand cmd = new OleDbCommand())
    {
        cmd.Connection = conn;
        cmd.CommandType = CommandType.Text;
        cmd.CommandText = string.Format("SELECT ITEM,CTRLNUMS FROM STUFF WHERE ITEM = '{0}'", stuffId);
        using (OleDbDataReader dr = cmd.ExecuteReader())
        {
            if (dr.Read())
            {
                stuff.StuffId = dr["ITEM"].ToString();

                // OPTION 1
                string ctrlNums = dr["CTRLNUMS"].ToString();

                // OPTION 2
                char[] buffer = new char[750];
                int index = 0;
                int readSize = 5;
                while (index < 750)
                {
                    long charsRead = dr.GetChars(dr.GetOrdinal("CTRLNUMS"), index, buffer, index, readSize);
                    index += (int)charsRead;
                    if (charsRead < readSize)
                    {
                        break;
                    }
                }
            }
        }
    }
}
You can find a description of the DBF structure here: http://www.dbf2002.com/dbf-file-format.html
What I think Clipper used to do was modify the field structure so that, in Character fields, the Decimal Places byte held the high-order byte of the size; Character field sizes were really 256*Decimals+Size.
I may have a C# class that reads dbfs (natively, not ADO/DAO), it could be modified to handle this case. Let me know if you're interested.
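If it helps, here's a rough sketch (not the class mentioned above) that applies that rule by walking the 32-byte field descriptors starting at offset 32 of the file -- length byte at offset 16, decimal-count byte at offset 17, per the DBF layout linked above -- and prints each field's real size. The path and file name are hypothetical:
using System;
using System.IO;

class DbfFieldDump
{
    static void Main()
    {
        // Hypothetical path -- point this at your Clipper DBF.
        using (var fs = File.OpenRead(@"c:\datadir\STUFF.DBF"))
        using (var br = new BinaryReader(fs))
        {
            br.ReadBytes(8);                   // skip version byte, date, record count
            short headerSize = br.ReadInt16(); // offset 8: total header length
            br.ReadBytes(22);                  // skip to the first field descriptor (offset 32)

            // Field descriptors are 32 bytes each, terminated by 0x0D.
            for (long pos = 32; pos + 32 <= headerSize; pos += 32)
            {
                fs.Seek(pos, SeekOrigin.Begin);
                byte[] desc = br.ReadBytes(32);
                if (desc[0] == 0x0D) break;    // end-of-descriptors marker

                string name = System.Text.Encoding.ASCII.GetString(desc, 0, 11).TrimEnd('\0');
                char type = (char)desc[11];
                int size = desc[16];
                int decimals = desc[17];

                // Clipper hack: for Character fields, the decimal-count byte
                // is the high-order byte of the length (256*Decimals+Size).
                int realSize = (type == 'C') ? decimals * 256 + size : size;
                Console.WriteLine("{0,-11} {1} {2}", name, type, realSize);
            }
        }
    }
}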
Are you still looking for an answer? Is this a one-off job or something that needs doing regularly?
I have a Python module that is primarily intended to extract data from all kinds of DBF files ... it doesn't yet handle the length_high_byte = decimal_places hack, but it's a trivial change. I'd be quite happy to (a) share this with you and/or (b) get a copy of such a DBF file for testing.
Added later: Extended-length feature added, and tested against files I've created myself. Offer to share code with anyone who would like to test it still stands. Still interested in getting some "real" files myself for testing.
3 suggestions that might be worth a shot...
1 - use Access to create a linked table to the DBF file, then use .NET to hit the table in the Access database instead of going directly to the DBF.
2 - try the FoxPro OLEDB provider (see the connection string sketch below)
3 - parse the DBF file by hand. Example is here.
My guess is that #1 should work the easiest, and #3 will give you the opportunity to fine tune your cussing skills. :)
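For what it's worth, a FoxPro OLE DB connection string for suggestion #2 would look something like this (the VFP OLE DB provider is a separate install; the path reuses the OP's data directory):
Provider=VFPOLEDB.1;Data Source=c:\datadir;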