Store Terminal Output in a File (ROS Melodic, C++)

I have a PCD file with x, y, z coordinates from a point cloud.
Now I have another C++ file from which I print x, y, z coordinates to the terminal. (These are just the coordinates, not a point cloud.)
I want to store this output in another file in order to compare it with the PCD file.
How do I do it?

Why do you need to store it directly from stdout? There are a couple of different ways to go about this that are probably easier.
You can simply publish the (x, y, z) data, record it with rosbag record, and then export via rosbag_to_csv.
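A minimal sketch of that publish-and-record approach (the node name, topic name, and get_the_data() helper are placeholders for your own code; afterwards run rosbag record /xyz_points):

import rospy
from geometry_msgs.msg import Point

rospy.init_node('xyz_publisher')
pub = rospy.Publisher('xyz_points', Point, queue_size=10)
rate = rospy.Rate(10)  # publish at 10 Hz
while not rospy.is_shutdown():
    x, y, z = get_the_data()  # hypothetical data source
    pub.publish(Point(x=x, y=y, z=z))
    rate.sleep()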
You could also just write the values to a file directly in the code instead of printing them. Since you did not specify Python or C++, here is a quick example in Python:
import rospy

# Open the output file in append mode so each run adds to it
with open('your_output_file.csv', 'a') as f:
    while not rospy.is_shutdown():
        # Whatever ops to get data
        x, y, z = get_the_data()
        f.write('%s,%s,%s\n' % (x, y, z))
ROS will also automatically log output from the rospy.log*() functions. You can control where this is stored by exporting the environment variable ROS_LOG_DIR. Note that this may not work 100% correctly for print() statements.
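For example, a minimal sketch using rospy logging instead of print() (get_the_data() is again a placeholder; by default the log files end up under ~/.ros/log):

import rospy

rospy.init_node('xyz_logger')
while not rospy.is_shutdown():
    x, y, z = get_the_data()  # hypothetical data source
    rospy.loginfo('%s,%s,%s', x, y, z)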
Finally, if you really, really need to use stdout for some reason, you can always redirect the output from however you're launching the node, e.g.: roslaunch your_package your_launch.launch >> some_file.txt

Related

How to use TensorFlow inference models to generate DeepDream-like images

I am using a custom image set to train a neural network with the TensorFlow API. After a successful training run I get checkpoint files containing the values of the different training variables. I now want to build an inference model from these checkpoint files. I found this script which does that, and I can then use the result to generate DeepDream images as explained in this tutorial. The problem is that when I load my model using:
import numpy as np
import tensorflow as tf

model_fn = 'export'
graph = tf.Graph()
sess = tf.InteractiveSession(graph=graph)
with tf.gfile.FastGFile(model_fn, 'rb') as f:
    graph_def = tf.GraphDef()
    graph_def.ParseFromString(f.read())
t_input = tf.placeholder(np.float32, name='input')
imagenet_mean = 117.0
t_preprocessed = tf.expand_dims(t_input - imagenet_mean, 0)
tf.import_graph_def(graph_def, {'input': t_preprocessed})
I get this error:
graph_def.ParseFromString(f.read())
self.MergeFromString(serialized)
raise message_mod.DecodeError('Unexpected end-group tag.')
google.protobuf.message.DecodeError: Unexpected end-group tag.
The script expects a protocol buffer file, and I am not sure whether the script I am using to generate the inference model actually produces protocol buffer files.
Can someone please suggest what I am doing wrong, or a better way to achieve this? I simply want to convert the checkpoint files generated by TensorFlow into a protocol buffer.
Thanks
The link to the script you ran is broken, but in any case the recommended approach is not to try to generate an inference model from a checkpoint, but rather to embed code at the end of your training program that emits a "SavedModel" export (which is not the same thing as a checkpoint); a sketch follows below.
Please see [1], and in particular the heading "Building a Saved Model". Note that a SavedModel consists of multiple files, one of which is indeed a protocol buffer (which I hope directly answers your question); the others are variable files and (optional) asset files.
[1] https://www.tensorflow.org/programmers_guide/saved_model
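For reference, a minimal sketch of such an export with the TF 1.x builder API (the export directory is a placeholder, and sess is assumed to be the session holding your trained variables):

import tensorflow as tf

# Assumes `sess` is the session holding the trained variables.
builder = tf.saved_model.builder.SavedModelBuilder('./exported_model')
builder.add_meta_graph_and_variables(sess, [tf.saved_model.tag_constants.SERVING])
builder.save()  # writes saved_model.pb (a protocol buffer) plus a variables/ directory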

How can I analyse the data from a UPPAAL simulator EPS file?

I ran the automaton and exported the EPS file, but how do I further analyse and extract information from it?
Is it possible to write variable values to an external log file?
Thanks in advance
There is no point in analysing the EPS files (these are meant to be included in reports).
For data analysis, try simulate queries in the verifier (Uppaal 4.1) like this:
simulate 1 [<=300] {
    (T(1).Ready+2*T(1).Computing+3*T(1).Release+4*T(1).Error)+8,
    (T(2).Ready+2*T(2).Computing+3*T(2).Release+4*T(2).Error)+4,
    (T(3).Ready+2*T(3).Computing+3*T(3).Release+4*T(3).Error)+0
}
where T(i) is a process and Ready, Computing, Release and Error are its locations. Then model-check the query, right-click on it to see the plot, and then either:
a) right-click on the plot and choose Export -> comma separated values,
or:
b) parse the plot values from the standard output of verifyta (the command-line tool) when verifying the query above.
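For option (a), a minimal Python sketch for loading the exported file (the filename is a placeholder, and the row layout is assumed to be time,value pairs; inspect your own export first):

import csv

points = []
with open('simulation_export.csv') as handle:  # hypothetical filename
    for row in csv.reader(handle):
        try:
            points.append((float(row[0]), float(row[1])))
        except (ValueError, IndexError):
            continue  # skip headers/separators between trajectories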

What is the definition of finding global and local maxima in an image?

I need to find the global and local maxima in an image in order to compute (GM - AverageLocalMaxima)^2. I am using OpenCV 3, but I think my real problem is that I don't understand the concept of global and local maxima.
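For what it's worth, one common interpretation: the global maximum is the single largest pixel value in the image, while a local maximum is a pixel at least as large as every pixel in its neighbourhood. A minimal OpenCV/NumPy sketch under that interpretation (the filename and the 3x3 neighbourhood are assumptions):

import cv2
import numpy as np

img = cv2.imread('input.png', cv2.IMREAD_GRAYSCALE).astype(np.float32)

# Global maximum (GM): the largest pixel value in the whole image
_, global_max, _, _ = cv2.minMaxLoc(img)

# Local maxima: pixels equal to the maximum over their 3x3 neighbourhood
dilated = cv2.dilate(img, np.ones((3, 3), np.uint8))
local_maxima = img[img == dilated]

result = (global_max - local_maxima.mean()) ** 2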

Delete variables based on the number of observations

I have an SPSS file that contains about 1000 variables, and I have to delete the ones that have 0 valid values. I can think of a loop with an if statement, but I can't figure out how to write it.
The simplest way would be to use the spssaux2.FindEmptyVars Python function like this:
begin program.
import spssaux2
spssaux2.FindEmptyVars(delete=True)
end program.
If you don't already have the spssaux2 module installed, you would need to get it from the SPSS Community website or the IBM Predictive Analytics site and save it in the python\lib\site-packages directory under your Statistics installation.
Otherwise, the VALIDATEDATA command, if you have it, will identify the variables violating such rules as a maximum percentage of missing values, but you would have to turn that output into a DELETE VARIABLES command. You could also look for variables with zero valid values using, say, DESCRIPTIVES and select out the ones with N=0.
If you've never worked with Python in SPSS, here's a way to get the job done without it (not as elegant, but it should do the job):
This will count the valid cases in each variable and select only those that have 0 valid cases. Then you'll manually copy the names of these variables into a syntax command that will delete them.
DATASET NAME Orig.
DATASET DECLARE VARLIST.
AGGREGATE /OUTFILE='VARLIST'/BREAK=
/**list_all_the_variable_names_here = NU(*FirstVarName to *LastVarName).
DATASET ACTIVATE VARLIST.
VARSTOCASES /MAKE NumValid FROM *FirstVarName to *LastVarName/INDEX=VarName(NumValid).
SELECT IF NumValid=0.
EXECUTE.
Pause here to copy the remaining names in the list and complete the syntax, then continue:
DATASET ACTIVATE Orig.
DELETE VARIABLES *paste_here_all_the_remaining_variable_names_from_varlist .
Notes:
* I put stars where you have to replace my text with your variable names.
** If the variables are neatly named like Q1, Q2, Q3 .... Q1000, you can use the "FirstVarName to LastVarName" form (Q1 to Q1000) instead of listing all the variable names.
BTW it is of course possible to do this completely automatically without manually copying those names (using only syntax, no Python), but the added complexity is not worth bothering with for a single use.

Retrieve sequence alignment score produced by EMBOSS in Biopython

I'm trying to retrieve the alignment score of two sequences compared using EMBOSS in Biopython. The only way that I know of is to retrieve it from an output text file produced by EMBOSS. The problem is that there will be hundreds of these files to iterate over. Is there an easier/cleaner method to retrieve the alignment score without resorting to that? This is the main part of the code that I'm using:
from Bio.Emboss.Applications import StretcherCommandline
needle_cline = StretcherCommandline(asequence=,bsequence=,gapopen=,gapextend=,outfile=)
stdout, stderr = needle_cline()
I had the same problem, and after some time spent searching for a neat solution I raised the white flag.
However, to speed up the processing of the output files significantly, I did the following things:
1) I used the Python re module (regular expressions) to extract all the data needed, as sketched below.
2) I created a ramdisk space for the output files. Using a ramdisk allowed all the data to be processed and exchanged in RAM (much faster than writing and reading the output files from a hard drive, not to mention it saves your HDD when processing a massive number of alignments).
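For illustration, a minimal sketch of the regex extraction, assuming the EMBOSS output header contains a line like "# Score: 412" (check your own files; the exact label can vary by tool and version):

import re

def extract_score(emboss_output_text):
    # Look for a header line such as "# Score: 412"
    match = re.search(r'#\s*Score:\s*([-\d.]+)', emboss_output_text)
    return float(match.group(1)) if match else None

with open('alignment.stretcher') as handle:  # hypothetical output file
    score = extract_score(handle.read())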
I don't know if there is a parser specifically for your command.
For Primer3Commandline there is Primer3, which makes your life much easier with something like:
from Bio.Emboss import Primer3

inputFile = "./wherever/your/outputfileis.out"
with open(inputFile) as fileHandle:
    record = Primer3.parse(fileHandle)
    # XXX check is len>0
    primers = record.next().primers
    numPrimers = len(primers)
    # you should have access to each primer, using a for loop
    # to check how to access the data you care about. For example:
I would also check http://biopython.org/wiki/SeqIO#Sequence_Input
