ImageJ - Image to Stack in Batch

I have .tiff files which contain 25 sections of a stack each. Is there a way to use the "Image to Stack" command in batch? Each data set contains 60 tiffs for all three channels of color.
Thanks
Christine

The general way to discover how to do these things is to use the macro recorder, which you can find under Plugins > Macros > Record .... If you then go to File > Import > Image Sequence... and select the first file of the sequence as normal, you should see something like the following appear in the recorder:
run("Image Sequence...", "open=[/home/mark/a/1.tif] number=60 starting=1 increment=1 scale=100 file=[] or=[] sort");
To allow this to work for arbitrary numbers of slices (my example happened to have 60) just leave out the number=60 bit. So, for example, to convert this directory of files to a single file from the command-line you can do:
imagej -eval 'run("Image Sequence...", "open=[/home/mark/a/1.tif] starting=1 increment=1 scale=100 file=[] or=[] sort"); saveAs("Tiff", "/home/mark/stack.tif");' -batch
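To run this over many datasets without manual clicking, the recorded call can also be wrapped in a small macro loop. The following is only a sketch, assuming each dataset lives in its own subfolder of a chosen directory; the folder layout and output names are hypothetical:

```
// Batch sketch: build one stack per subfolder of a chosen directory.
dir = getDirectory("Choose a Directory");
list = getFileList(dir);
for (i = 0; i < list.length; i++) {
    if (endsWith(list[i], "/")) {   // subfolders only
        run("Image Sequence...",
            "open=[" + dir + list[i] + "] starting=1 increment=1 scale=100 file=[] or=[] sort");
        saveAs("Tiff", dir + replace(list[i], "/", "") + "_stack.tif");
        close();
    }
}
```

Run it from the script editor, or pass it to imagej -batch as above.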

How to grep between 2 txt files

I have 2 txt files.
The 1st txt file is like this:
sequence_id description
Solyc01g005420.2.1 No description available
Solyc01g006950.3.1 "31.4 cell.vesicle transport Encodes a syntaxin localized at the plasma membrane (SYR1 Syntaxin Related Protein 1 also known as SYP121 PENETRATION1/PEN1). SYR1/PEN1 is a member of the SNARE superfamily proteins. SNARE proteins are involved in cell signaling vesicle traffic growth and development. SYR1/PEN1 functions in positioning anchoring of the KAT1 K+ channel protein at the plasma membrane. Transcription is upregulated by abscisic acid suggesting a role in ABA signaling. Also functions in non-host resistance against barley powdery mildew Blumeria graminis sp. hordei. SYR1/PEN1 is a nonessential component of the preinvasive resistance against Colletotrichum fungus. Required for mlo resistance. syntaxin of plants 121 (SYP121)"
Solyc01g007770.2.1 No description available
Solyc01g008560.3.1 No description available
Solyc01g068490.3.1 20.1 stress.biotic Encodes a protein containing a U-box and an ARM domain. senescence-associated E3 ubiquitin ligase 1 (SAUL1)
..
.
the 2nd txt file has the gene ids:
Solyc02g080050.2.1
Solyc09g083200.3.1
Solyc05g050380.3.1
Solyc09g011490.3.1
Solyc04g051490.3.1
Solyc08g006470.3.1
Solyc01g107810.3.1
Solyc03g095770.3.1
Solyc12g006370.2.1
Solyc03g033840.3.1
Solyc02g069250.3.1
Solyc02g077040.3.1
Solyc03g093890.3.1
..
.
.
Each txt file has many more lines than the ones I show. I just want to know what grep command I should use so that I get only the genes that are in the 2nd txt file, extracted from the 1st file with the description next to each.
thanks
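One hedged way to do this with grep (the file names descriptions.txt, ids.txt, and matched.txt are hypothetical, standing in for your 1st file, 2nd file, and the output):

```shell
# Hypothetical demo data: descriptions.txt is the 1st file, ids.txt the 2nd.
printf '%s\n' 'Solyc01g005420.2.1 No description available' \
              'Solyc01g006950.3.1 syntaxin SYP121' > descriptions.txt
printf '%s\n' 'Solyc01g006950.3.1' > ids.txt

# -F: treat each ID as a fixed string (a '.' would otherwise be a regex wildcard)
# -f: read one pattern per line from ids.txt
# -w: whole-word match, so one ID cannot match as a prefix of a longer one
grep -F -w -f ids.txt descriptions.txt > matched.txt
cat matched.txt   # → Solyc01g006950.3.1 syntaxin SYP121
```

With large ID lists this stays fast because grep loads all patterns at once instead of being invoked per ID.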

How can I generate a single .avro file for a large flat file with 30 MB+ data

Currently two Avro files are generated for a 10 KB file; if I follow the same approach with my actual file (30 MB+), I will get n files.
So I need a solution that generates only one or two .avro files even if the source file is large.
Also, is there any way to avoid manual declaration of the column names?
current approach...
spark-shell --packages com.databricks:spark-csv_2.10:1.5.0,com.databricks:spark-avro_2.10:2.0.1
import org.apache.spark.sql.types.{StructType, StructField, StringType}
// Manual schema declaration of the 'co' and 'id' column names and types
val customSchema = StructType(Array(
  StructField("ind", StringType, true),
  StructField("co", StringType, true)))
val df = sqlContext.read.format("com.databricks.spark.csv").option("comment", "\"").option("quote", "|").schema(customSchema).load("/tmp/file.txt")
df.write.format("com.databricks.spark.avro").save("/tmp/avroout")
// Note: /tmp/file.txt is input file/dir, and /tmp/avroout is the output dir
Try specifying the number of partitions of your DataFrame while writing the data as Avro (or any other format). To fix this, use the repartition or coalesce DataFrame functions.
df.coalesce(1).write.format("com.databricks.spark.avro").save("/tmp/avroout")
This way it writes only one file into "/tmp/avroout".
Hope this helps!
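On the second part of the question (avoiding the manual column declaration): spark-csv can infer the column types via its inferSchema option instead of a hand-written StructType. A sketch, assuming the same input file; note that without a header row the inferred columns get generic names like C0, C1:

```scala
val df = sqlContext.read.format("com.databricks.spark.csv")
  .option("inferSchema", "true") // infer column types with an extra pass over the data
  .option("header", "true")      // use only if the file's first line holds column names
  .load("/tmp/file.txt")
```

The extra pass over the data costs time on large files, so a fixed schema is still preferable when the layout is stable.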

python-fu gimp get items list

How can I get the list of items in an image using python-fu? I've tried searching for a similar function in Python Procedure Browser and google but I couldn't find any. (My end goal is to select a text item on a layer and convert it to a path)
Edit - Using GIMP version 2.8.2 on macOS Mojave
You don't list "items" in general. You list layers, channels, or paths, either using the PDB:
for lid in pdb.gimp_image_get_layers(image)
for cid in pdb.gimp_image_get_channels(image)
for vid in pdb.gimp_image_get_vectors(image)
or the attributes of the image object:
for l in image.layers
for c in image.channels
for v in image.vectors
The PDB calls return integer item IDs (use gimp._id2drawable(id)/gimp._id2vectors(id) to get the objects), while the image attributes are lists of gimp.Layer/gimp.Channel/gimp.Vector objects (and are therefore much simpler to work with).
To tell if a layer is a text layer, you have to use a PDB call: pdb.gimp_item_is_text_layer(layer)
You can iterate the text layers thus:
for textlayer in [l for l in image.layers if pdb.gimp_item_is_text_layer(l)]
To get the path from a text layer:
path=pdb.gimp_vectors_new_from_text_layer(image,layer)

Abaqus read input file very slow for many materials

EDIT*: It turned out that this was not causing the slow import after all. Nevertheless, the answer given explains a better way to implement different densities with one material, so I'll leave the question up. (The slow import was caused by running the scripts from the Abaqus PDE instead of using 'Run script' from the File menu. Special thanks to droooze for finding the problem.)
I'm trying to optimize the porosity distribution of a certain material. Therefore I'm performing Abaqus FEA simulations with ±500 different materials in one part. The simulation itself only takes about 40 seconds, but reading the input file takes more than 3 minutes. (I used a Python script to generate the .inp file.)
I'm using these commands to generate my materials in the input file:
*SOLID SECTION, ELSET = ES_Implant_MAT0 ,MATERIAL=Implant_material0
*ELSET, ELSET=ES_Implant_MAT336
6,52,356,376,782,1793,1954,1984,3072
*MATERIAL, NAME = Implant_material0
*DENSITY
4.43
*ELASTIC
110000, 0.3
Any idea why this is so slow, and is there a more efficient way to do this to reduce the input file loading time?
If your ~500 materials are all of the same kind (e.g. all linear elastic isotropic with a mass density), then you can collapse them all into one material and define a distribution table which distributes these materials directly onto the instance element labels.
Syntax:
(somewhere in the Part definition, under section)
*SOLID SECTION, ELSET = ES_Implant_MAT0 ,MATERIAL=Implant_material0
(somewhere in the Assembly definition; part= should reference the name of the part above)
**
**
** ASSEMBLY
**
*Assembly, name=Assembly
**
*Instance, name=myinstance, part=mypart
*End Instance
**
*Elset, elset=ES_Implant_MAT0, instance=myinstance
1,2,...
(somewhere in the Materials definition; see Abaqus Keywords Reference Guide for the keywords *DISTRIBUTION TABLE and *DISTRIBUTION)
**
** MATERIALS
**
*DISTRIBUTION TABLE, NAME=IMPLANT_MATERIAL0_ELASTIC_TABLE
MODULUS,RATIO
*DISTRIBUTION, NAME=Implant_material0_elastic, LOCATION=element, TABLE=IMPLANT_MATERIAL0_ELASTIC_TABLE
,110000,0.3 # First line is some default value
myinstance.1,110000,0.3 # syntax: instance name [dot] instance element label
myinstance.2,110000,0.3 # these elements currently use the material properties assigned to `ELSET = ES_Implant_MAT0`. You can define the material properties belonging to other element sets in this same table, making sure you reference the element label correctly.
...
*DISTRIBUTION TABLE, NAME=IMPLANT_MATERIAL0_DENSITY_TABLE
DENSITY
*DISTRIBUTION, NAME=Implant_material0_density, LOCATION=element, TABLE=IMPLANT_MATERIAL0_DENSITY_TABLE
,4.43 # Default value
myinstance.1,4.43
myinstance.2,4.43
...
*Material, name=Implant_material0
*Elastic
Implant_material0_elastic # Distribution name
*Density
Implant_material0_density # Distribution name
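Since the .inp file is already generated from a Python script, the *DISTRIBUTION data lines can be emitted programmatically as well. A minimal sketch; the function name, instance name, and property values are illustrative, not part of any Abaqus API:

```python
def distribution_lines(instance, props, default):
    """Build *DISTRIBUTION data lines: the default line first (empty label),
    then one `instance.label,value,...` line per element."""
    lines = ["," + ",".join(str(v) for v in default)]
    for label in sorted(props):
        lines.append(instance + "." + str(label) + ","
                     + ",".join(str(v) for v in props[label]))
    return "\n".join(lines)

# Elastic distribution for two hypothetical element labels:
elastic = {1: (110000, 0.3), 2: (95000, 0.3)}
print(distribution_lines("myinstance", elastic, (110000, 0.3)))
```

This keeps the element-to-property mapping in one Python dict instead of ~500 separate *MATERIAL blocks.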

Can I get a list of all currently-registered atoms?

My project has blown through the max 1M atoms; we've cranked up the limit, but I need to apply some sanity to the code that people are submitting with regard to list_to_atom and its friends. I'd like to start by getting a list of all the registered atoms so I can see where the largest offenders are. Is there any way to do this? I'll have to be creative about how I do it so I don't end up trying to dump 1-2M lines in a live console.
You can get hold of all atoms by using an undocumented feature of the external term format.
TL;DR: Paste the following line into the Erlang shell of your running node. Read on for explanation and a non-terse version of the code.
(fun F(N)->try binary_to_term(<<131,75,N:24>>) of A->[A]++F(N+1) catch error:badarg->[]end end)(0).
Elixir version by Ivar Vong:
for i <- 0..:erlang.system_info(:atom_count)-1, do: :erlang.binary_to_term(<<131,75,i::24>>)
An Erlang term encoded in the external term format starts with the byte 131, then a byte identifying the type, and then the actual data. I found that EEP-43 mentions all the possible types, including ATOM_INTERNAL_REF3 with type byte 75, which isn't mentioned in the official documentation of the external term format.
For ATOM_INTERNAL_REF3, the data is an index into the atom table, encoded as a 24-bit integer. We can easily create such a binary: <<131,75,N:24>>
For example, in my Erlang VM, false seems to be the zeroth atom in the atom table:
> binary_to_term(<<131,75,0:24>>).
false
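For illustration only, the byte layout of that binary (<<131,75,N:24>>) can be sketched outside Erlang as well, e.g. in Python; the resulting bytes still have to be fed to binary_to_term inside an Erlang VM to resolve the atom:

```python
def atom_ref_bytes(n: int) -> bytes:
    """External term format bytes for ATOM_INTERNAL_REF3:
    131 = version magic, 75 = type byte, then a 24-bit big-endian atom index."""
    return bytes([131, 75]) + n.to_bytes(3, "big")

print(atom_ref_bytes(0).hex())  # → 834b000000
```

The 24-bit index is why this scheme tops out at 2^24 atom table entries per encoded reference.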
There's no simple way to find the number of atoms currently in the atom table*, but we can keep increasing the number until we get a badarg error.
So this little module gives you a list of all atoms:
-module(all_atoms).
-export([all_atoms/0]).

atom_by_number(N) ->
    binary_to_term(<<131,75,N:24>>).

all_atoms() ->
    atoms_starting_at(0).

atoms_starting_at(N) ->
    try atom_by_number(N) of
        Atom ->
            [Atom] ++ atoms_starting_at(N + 1)
    catch
        error:badarg ->
            []
    end.
The output looks like:
> all_atoms:all_atoms().
[false,true,'_',nonode@nohost,'$end_of_table','','fun',
infinity,timeout,normal,call,return,throw,error,exit,
undefined,nocatch,undefined_function,undefined_lambda,
'DOWN','UP','EXIT',aborted,abs_path,absoluteURI,ac,accessor,
active,all|...]
> length(v(-1)).
9821
* In Erlang/OTP 20.0, you can call erlang:system_info(atom_count):
> length(all_atoms:all_atoms()) == erlang:system_info(atom_count).
true
I'm not sure if there's a way to do it on a live system, but if you can run it in a test environment you should be able to get a list via crash dump. The atom table is near the end of the crash dump format. You can create a crash dump via erlang:halt/1, but that will bring down the whole runtime system.
I dare say that if you use more than 1M atoms, then you are doing something wrong. Atoms are intended to be static once the application is running, or at least bounded above by some small number, 3000 or so for a medium-sized application.
Be very careful when an attacker can generate atoms in your VM; calls like list_to_atom/1 in particular are somewhat dangerous.
EDITED (wrong answer..)
You can adjust the maximum number of atoms with the +t emulator flag:
http://www.erlang.org/doc/efficiency_guide/advanced.html
...but I know of very few use cases where it is necessary.
You can track atom stats with erlang:memory()