Where can I find the file to set the distance for the urg_node?

I would like to set the detection range for urg_node.
I changed lines 88 and 89 of the launch file, but the detection distance did not change:
88: <param name="range_min" type="double" value="0.01"/>
89: <param name="range_max" type="double" value="50.0"/>
Source code:
https://github.com/vstoneofficial/mecanumrover_samples
mecanumrover_samples/launch/gmapping.launch

I'm not familiar with this exact lidar, but the urg_node documentation (http://wiki.ros.org/urg_node) makes no mention of parameters for a min/max detection range, and I didn't find any implementation of them in the source code. So it's not possible to do that directly without modifying the source of urg_node_driver.cpp. But there is a way.
The ROS way to do it would be to pass the lidar data through a filter node and use the filtered output. To do so, you can use a LaserScanRangeFilter configuration for the laser_filters package:
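A minimal filter-chain configuration might look like this (a sketch; the threshold values and replacement behaviour are assumptions to adapt to your lidar and use case):

scan_filter_chain:
- name: range
  type: laser_filters/LaserScanRangeFilter
  params:
    use_message_range_limits: false
    lower_threshold: 0.1    # drop returns closer than this (metres)
    upper_threshold: 5.0    # drop returns farther than this (metres)
    lower_replacement_value: -.inf
    upper_replacement_value: .inf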
And if you use filters, you'll need to remap (http://wiki.ros.org/roslaunch/XML/remap) some topic names to get the correct data flow lidar -> filter -> gmapping/etc., for example:
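Something along these lines in the launch file (a sketch; the package name and config path are placeholders, and it assumes the lidar publishes on the default scan topic):

<!-- run the filter chain: it subscribes to "scan" and publishes "scan_filtered" -->
<node pkg="laser_filters" type="scan_to_scan_filter_chain" name="laser_filter">
  <rosparam command="load" file="$(find your_package)/config/range_filter.yaml" />
</node>
<!-- point gmapping at the filtered topic instead of the raw scan -->
<node pkg="gmapping" type="slam_gmapping" name="slam_gmapping">
  <remap from="scan" to="scan_filtered" />
</node>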
PS: lines 88 and 89 that you edited are meant for ydlidar_ros_driver. I assume you are not using a YDLIDAR, so that's not what you want.

Does /gun/polarization 0. 0. -1. in histo.mac of the Pol01 example represent the photon spin?

On page 348 of the Geant4 User's Guide and Applications Manual (refer to the following link)
http://ftp.tku.edu.tw/Linux/Gentoo/distfiles/BookForAppliDev-4.10.03.pdf
it states that
"Pol01 - interaction of polarized beam (e.g. circularly polarized photons) with polarized target"
Lines 25 and 26 of the histo.mac file in the Pol01 example contain the following two instructions:
/gun/polarization 0. 0. -1.
/gun/particle gamma
The direction of this gamma beam is along the z-axis, so, assuming the code is correct, the first line cannot be describing the polarization state of the electric field. Am I to take it, then, that in this context the first line defines the photon spin projection, and therefore defines a circularly polarized photon, either left- or right-handed depending on which convention Geant4 uses?
Reading the code, Geant4 treats the three-vector you pass to /gun/polarization for a photon as the (S1, S2, S3) components of a Stokes vector, used in the calculations described in the Wikipedia article below.
https://en.wikipedia.org/wiki/Stokes_parameters
A (0,0,1) vector will represent circularly polarized light.
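For reference, a sketch of the Stokes-vector convention from that article (whether S_3 = +1 means left- or right-circular depends on the handedness convention, which is worth checking against the Geant4 source):

% Stokes vector set by /gun/polarization for a photon:
%   S_1 : linear polarization along x vs. y
%   S_2 : linear polarization at +45 vs. -45 degrees
%   S_3 : circular polarization (helicity)
\vec{S} = (S_1, S_2, S_3), \qquad (0,\,0,\,\pm 1) \;\Rightarrow\; \text{fully circularly polarized}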

Image preprocessing methods for identifying industrial part numbers (on a sticker or engraved) on a surface?

I am working on a project where my task is to identify a machine part by its part number, which is written on a label attached to the part or engraved on its surface. One example each of a label and an engraved part is shown in the figures below.
My task is to recognise a 9- or 10-character alphanumeric code (03C 997 032 D in the first image and 357 955 531 in the second). This seems like an easy task; however, I am having trouble distinguishing the useful information from the rest of the part, i.e. there are many other numbers and characters in both images, and I want to focus only on the codes mentioned. I have tried many things but with no success so far. Does anyone know which image preprocessing methods or ML/DL models I should apply to get the desired result?
Thanks in advance!
JD
You can use OCR to get all the characters from the image and then use regular expressions to extract the desired patterns.
You can use an OCR engine, like Tesseract.
Maybe you want to clean the images before running the text-recognition system, by performing some filtering to remove noise and extra information, such as:
Convert to grayscale (colours are not relevant, are they?)
Crop to the region of interest
Canny filter
A good starting point is one of these tutorials:
OpenCV OCR with Tesseract (Python API)
Recognizing text/number with OpenCV (C++ API)
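As a rough sketch of how these pieces fit together with OpenCV and pytesseract (the file name and the regular expression are assumptions based on the two example codes in the question):

import re

import cv2
import pytesseract  # requires the Tesseract binary to be installed

# Preprocess: grayscale plus a mild median blur to suppress surface noise
img = cv2.imread("part.jpg")  # hypothetical input image
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
gray = cv2.medianBlur(gray, 3)

# OCR the whole (or a cropped) image
text = pytesseract.image_to_string(gray)

# Keep only strings shaped like the part numbers in the question,
# e.g. "03C 997 032 D" or "357 955 531"; adjust the pattern as needed.
pattern = re.compile(r"\b[0-9A-Z]{3} \d{3} \d{3}(?: [A-Z])?\b")
print(pattern.findall(text))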

Why does ELKI need db.in file in addition to distance matrix? Also what should db.in file contain?

I tried to follow this tutorial on using ELKI with pre-computed distances for clustering.
http://elki.dbs.ifi.lmu.de/wiki/HowTo/PrecomputedDistances
I used the following set of command line options:
-dbc.filter FixedDBIDsFilter -dbc.startid 0 -algorithm clustering.OPTICS
-algorithm.distancefunction external.FileBasedDoubleDistanceFunction
-distance.matrix /path/to/matrix -optics.minpts 5 -resulthandler ResultWriter
ELKI fails with a configuration error saying a db.in file is needed for the computation.
The following configuration errors prevented execution:
No value given for parameter "dbc.in":
Expected: The name of the input file to be parsed.
No value given for parameter "parser.distancefunction":
Expected: Distance function used for parsing values.
My question is: what is the db.in file? Why should I provide it in addition to the distance matrix file, since the pairwise distance matrix completely specifies all the information about the point cloud? (Also, I don't have access to any information other than the pairwise distances.)
What should I do about db.in? Should I override it, specify some dummy information, etc.? Kindly help me understand.
Thank you.
This is documented in the ELKI HowTos:
http://elki.dbs.ifi.lmu.de/wiki/HowTo/PrecomputedDistances
Using without primary data
-dbc DBIDRangeDatabaseConnection -idgen.count 100
However, there is a bug (a patch is on the HowTo page, and will be in the next release), so right now you can't fully use this; as a workaround, you can use a text file that enumerates the objects.
The reason for this is that ELKI is designed to work on multi-relational data; it's not just processing matrices. Some algorithms may, for example, need a geographic representation of an object, some measurements for that object, and a label for evaluation. That is three relations.
What the DBIDRange data source essentially does is create a single "fake" relation consisting of just the DBIDs 0 to 99. For algorithms that don't need the actual data, only distances (e.g. LOF, DBSCAN, or OPTICS), it is sufficient to have object IDs and a distance matrix.
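Putting it together, the invocation would then look something like the following (a sketch that only combines the options already shown above; the matrix path is a placeholder, and exact parameter names may vary between ELKI versions):
-dbc DBIDRangeDatabaseConnection -idgen.count 100
-algorithm clustering.OPTICS
-algorithm.distancefunction external.FileBasedDoubleDistanceFunction
-distance.matrix /path/to/matrix
-optics.minpts 5 -resulthandler ResultWriter
Here -idgen.count must match the number of objects (rows/columns) in the distance matrix.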

Please help with using SPSS to add up Likert-type scales

Since the last post was closed due to unclear expression, here is an edited one.
There are in total 20 items from 5 Likert-type scale questions in a questionnaire. I need to add up the 20 items from the 5 separate questions to create a total scale. I already have the data.
The question is just like the picture above. How can I run the command to add up the 20 items from the 5 separate questions? What is the command?
Is it something like Transform > Compute Variable: enter a variable name, specify which items to add up, and hey presto (e.g. "V1+V2+V3" etc.)?
You can do exactly as you suggested, using the Transform -> Compute Variable... function. Simply type the name of your new scale into the Target Variable box and the sum you want into the Numeric Expression box.
You will see that the following SPSS syntax command is run:
COMPUTE total=v1 + v2 + v3 + v4.
EXECUTE.
If any of the variables has a missing value, then simply adding them will result in a missing value as well. If you don't want to impute the missing values yourself, using the MEAN function in syntax works well. Also, if the variables are contiguous in the data file, you can make the syntax much more readable by using the TO keyword.
COMPUTE myscore=MEAN(variable1 TO variable5)*5.
The resulting value estimates the total score by, in effect, replacing any missing items with the mean of that respondent's answered items.
However, it seems that the problem in this case is that the data entry process has dummy-coded all of the items, producing 20 separate variables instead of 5, where each block of 4 variables holds a 0 or 1 but represents the values 1 to 4. In this case, you can use the following syntax:
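* The loop below recovers each original 1-to-4 response from its block of
* four 0/1 dummy variables by weighting the dummies 1,2,3,4 and summing
* the results into myscore.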
COMPUTE mycounter=1.
COMPUTE myscore=0.
EXECUTE.
DO REPEAT a=variable1 TO variable20.
COMPUTE myscore=myscore+mycounter*a.
COMPUTE mycounter=mycounter+1.
IF (mycounter=5) mycounter=1.
END REPEAT.
EXECUTE.
Note that the variables from variable1 to variable20 must have each set of dummy codes from the original items clustered together in ascending order.

How to fill in the 'holes' in an irregularly spaced grid or array with missing data?

Does anyone have a straightforward Delphi example of filling in a grid using Delaunay triangulation or kriging? Either method can fill a grid by 'interpolating'.
What do I want to do? I have a grid, similar to:
22 23 xx 17 19 18 05
21 xx xx xx 17 18 07
22 24 xx xx 18 21 20
30 22 25 xx 22 20 19
28 xx 23 24 22 20 18
22 23 xx 17 23 15 08
21 29 30 22 22 17 09
where the xx's represent grid cells with no data, and the x,y coordinates of each cell are known. Both kriging and Delaunay triangulation can supply the 'missing' points (which are, of course, fictitious, but reasonable values).
Kriging is a statistical method to fill in 'missing' or unavailable data in a grid with 'reasonable' values. Why would you need it? Principally to 'contour' the data. Contouring algorithms (like CONREC for Delphi, http://local.wasp.uwa.edu.au/~pbourke/papers/conrec/index.html) can contour regularly spaced data.
Google around for 'kriging' and 'Delphi' and you are eventually pointed to the GEOBLOCK project on SourceForge (http://geoblock.sourceforge.net/). Geoblock has numerous Delphi .pas units for kriging based on GSLIB (a Fortran statistical package developed at Stanford). However, all the kriging/Delaunay units depend on units referred to in the Delphi uses clause. Unfortunately, these 'helper' units are not posted with the rest of the source code. It appears none of the kriging units can stand alone or work without helper units that are not posted or that, in some cases, use undefined data types.
Delaunay triangulation is described at http://local.wasp.uwa.edu.au/~pbourke/papers/triangulate/index.html. Posted there is a Delphi example, pretty neat, that generates 'triangles'. Unfortunately, I haven't a clue how to use the unit with a static grid; the example 'generates' a data field on the fly.
Has anyone got either of these units to work to fill an irregular data grid? Any code or hints how to use the existing code for kriging a simple grid or using Delaunay to fill in the holes would be appreciated.
I'm writing this as an answer because it's too long to fit into a comment.
Assuming your grid really is irregular (you give no examples of a typical pattern of grid coordinates), then triangulation only partially helps. Once you have triangulated you would then use that triangulation to do an interpolation, and there are different choices that could be made.
But you've not said anything about how you want to interpolate, or what you want to do with that interpolation.
It seems to me that you have asked for some code, but it's not clear that you know what algorithm you want. That's really the question you should have asked.
For example, since you appear to have no criteria for how the interpolation should be done, why not choose the nearest neighbour for your missing values? Or why not use the overall mean for the missing values? Both of these choices meet all the criteria you have specified, since you haven't specified any! A sketch of the first option follows below.
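To make that concrete, here is a minimal nearest-neighbour fill (shown in Python for brevity, since the choice of algorithm is language-independent; the same two loops translate directly to Delphi):

# Nearest-neighbour fill: replace each missing cell with the value of the
# closest known cell (squared Euclidean distance on the cell indices).
# The grid is the one from the question, with None for the 'xx' cells.
grid = [
    [22, 23, None, 17, 19, 18, 5],
    [21, None, None, None, 17, 18, 7],
    [22, 24, None, None, 18, 21, 20],
    [30, 22, 25, None, 22, 20, 19],
    [28, None, 23, 24, 22, 20, 18],
    [22, 23, None, 17, 23, 15, 8],
    [21, 29, 30, 22, 22, 17, 9],
]

# Collect the coordinates and values of all known cells
known = [(r, c, v)
         for r, row in enumerate(grid)
         for c, v in enumerate(row) if v is not None]

filled = [row[:] for row in grid]
for r in range(len(grid)):
    for c in range(len(grid[r])):
        if grid[r][c] is None:
            # Take the value of the known cell nearest to (r, c)
            kr, kc, kv = min(known,
                             key=lambda k: (k[0] - r) ** 2 + (k[1] - c) ** 2)
            filled[r][c] = kv

for row in filled:
    print(row)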
Really I think you need to spend some more time explaining what properties you want this interpolation to have, what you are going to do with it etc. I also think you should stop thinking about code for now and think about algorithms. Since you have mentioned statistics you should consider asking at https://stats.stackexchange.com/.
Code posted by Richard Winston on the Embarcadero Developer Network Code Central, titled
Delaunay triangulation and contouring code
(ID: 29365), demonstrates routines for generating constrained Delaunay triangulations and for plotting contour lines based on data points at arbitrary locations. These algorithms do not manipulate or fill in the holes in a grid, but they do provide a method for contouring arbitrary data and do not require a grid without missing values.
I still have not found an acceptable kriging algorithm in Pascal to actually fill in the holes in a grid.
