IDL histogram with y-axis in logarithmic scale

Is it possible to make a histogram in IDL with a logarithmic scale on the y-axis? I know it is possible to make logarithmic bins, but I'd like a log scale on the y-axis and a linear x-axis. Gnuplot can do it, but it is not very rich in options.
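Not an IDL answer, but for comparison, here is the same effect in Python/matplotlib: the bins stay linear and only the y-axis is switched to a log scale (a minimal sketch with stand-in data):

```python
import numpy as np
import matplotlib.pyplot as plt

data = np.random.randn(10000)  # stand-in for your data

# Linear bins on the x-axis; only the y-axis is switched to a log scale.
plt.hist(data, bins=50)
plt.yscale("log")
plt.show()
```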

Related

The histograms in Joint Plot

I want to draw a scatterplot using jointplot, where I am using hue. But the KDE plots on the marginal axes are not what I want. Instead of those, I need histograms showing the aggregate count of points in each x and y range. How can I do that?
In the image above, instead of the KDE plots, I need histograms plotted for the aggregate count of points in that x and y range.
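Assuming this refers to seaborn's jointplot (the hue parameter suggests it), here is a minimal sketch that swaps the KDE marginals for histograms by building the figure from a JointGrid; the penguins dataset stands in for the asker's data:

```python
import seaborn as sns
import matplotlib.pyplot as plt

penguins = sns.load_dataset("penguins")  # example data standing in for yours

# jointplot with hue defaults to KDE marginals; JointGrid lets us choose
# the marginal plot type explicitly.
g = sns.JointGrid(data=penguins, x="bill_length_mm", y="bill_depth_mm",
                  hue="species")
g.plot_joint(sns.scatterplot)   # scatterplot in the joint axes
g.plot_marginals(sns.histplot)  # histograms (counts) on the marginal axes
plt.show()
```

plot_marginals accepts any axes-level plotting function, so sns.histplot here yields per-hue counts on each marginal axis.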

How to find the peak value of an image plot? (Plot Digitizer)

I want to extract the peak value from a plot automatically.
I tried WebPlotDigitizer and other programs and packages; however, none of them gives points on the plot automatically. Is there any way to achieve this using image processing, such as a CNN?
I am thinking of making custom filters to find the peak point.
Thanks in advance.
Sample plot
Algorithm (see the sketch below):
convert to grayscale and binarize
find the coordinates (x, y) of a white pixel where y is the minimal nonzero value (the topmost pixel of the curve)
add the blob radius to y: y = y + r
apply the scale transformation from the range [0, image_height] to your data range [0, 25]
calculate the new value of y under that transformation
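A minimal OpenCV/NumPy sketch of these steps; the filename, the threshold, the blob radius r, and the [0, 25] data range are assumptions taken from the sample plot:

```python
import cv2
import numpy as np

img = cv2.imread("plot.png", cv2.IMREAD_GRAYSCALE)   # placeholder filename
_, bw = cv2.threshold(img, 128, 255, cv2.THRESH_BINARY)

# Coordinates of all white (curve) pixels. Image rows grow downward, so the
# peak of the curve is the white pixel with the smallest row index.
ys, xs = np.nonzero(bw)
i = np.argmin(ys)
x_px, y_px = xs[i], ys[i]

r = 3               # assumed blob/marker radius in pixels
y_px = y_px + r     # step down from the blob's top edge to its centre

# Map pixel rows [0, image_height] to the data range [0, 25], flipping the
# axis because row 0 is the top of the image.
h = bw.shape[0]
peak_value = (h - y_px) / h * 25.0
print(x_px, peak_value)
```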

Relationship of standard deviation for Gaussian filter between pixel domain and the real world

I constructed an experiment with Gaussian blur on real-world and MR images: I printed some blurred test images and compared them with augmented (digitally blurred) images as well.
What is the best way to express how much blurring I applied in real-world coordinates?
The image is 2560x1440 pixels, corresponding to 533x300 cm in the real world. If this image is blurred with a Gaussian with standard deviation n (filter size is ceil(3 * n) * 2 + 1), how can this be expressed in centimeters? Is it reasonable to express it as the real size of the filter in centimeters?
In short, yes, it is perfectly reasonable to express the size of the kernel in real-world coordinates.
In your case, you have 533 cm == 2560 pixels horizontally, which is 0.2082 cm per pixel. (Please edit if the question has a mistake and this should be mm instead of cm.) Vertically you have approximately the same, so we can assume isotropic sampling and leave it at 0.208 cm/px.
Given that pixel size, a standard deviation of the Gaussian of n is equivalent to a standard deviation of 0.208*n cm in the real world.
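The arithmetic as a small sketch; the sigma n = 4 px is just an assumed example value:

```python
import math

img_px  = (2560, 1440)    # image size in pixels
real_cm = (533.0, 300.0)  # physical size in cm (from the question)

cm_per_px = real_cm[0] / img_px[0]  # 0.2082 cm/px horizontally
# Vertically: 300/1440 = 0.2083 cm/px, so isotropic sampling is a fair assumption.

n = 4.0                              # example sigma in pixels (assumed)
sigma_cm = cm_per_px * n             # sigma expressed in real-world cm
ksize_px = math.ceil(3 * n) * 2 + 1  # filter-size rule from the question
ksize_cm = cm_per_px * ksize_px      # the kernel's physical extent

print(f"sigma = {sigma_cm:.3f} cm, kernel = {ksize_px} px = {ksize_cm:.2f} cm")
```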

Calculate the unit gradient vector

I have a problem calculating the unit gradient vector. I have a formula but I don't understand it. If possible, could you explain this formula in more detail? I must implement it on an image for eye center localization. Thank you for your interest.
UGVs formula
Gradient vector calculation gives you the magnitude and orientation for each pixel in the image. This means you need to calculate the derivatives along the x-axis and y-axis separately, then combine them to get the magnitude and direction of the vectors. If you are using OpenCV or MATLAB, you will find functions to calculate the gradient magnitude and direction of the pixels in an image; for MATLAB, see the imgradient and imgradientxy functions.
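A minimal OpenCV sketch of the same idea, normalizing the gradient to unit vectors; the filename is a placeholder, and Sobel is just one common choice of derivative filter:

```python
import cv2
import numpy as np

img = cv2.imread("eye.png", cv2.IMREAD_GRAYSCALE).astype(np.float64)  # placeholder file

# Derivatives along the x-axis and y-axis, computed separately
gx = cv2.Sobel(img, cv2.CV_64F, 1, 0, ksize=3)
gy = cv2.Sobel(img, cv2.CV_64F, 0, 1, ksize=3)

# Combine them into a per-pixel gradient magnitude
mag = np.sqrt(gx**2 + gy**2)

# Unit gradient vectors: divide each component by the magnitude,
# guarding against division by zero in flat regions
eps = 1e-12
ux = gx / (mag + eps)
uy = gy / (mag + eps)
```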

x-IMU hard-iron calibration values too big for the registers

I am using an x-IMU from x-io Technologies.
For drift correction it uses the on-board AHRS algorithm.
Without hard-iron calibration there is a small continuous rotation.
From hard-iron calibration with x-IMU-GUI-v13.1 I get values like:
x-axis hard-iron bias: 882076,942002059
y-axis hard-iron bias: -814599,840421389
z-axis hard-iron bias: 834205,266804569
They are automatically written to the hard-iron bias registers.
These registers hold values between -16 and 15.99951 (16 − 2⁻¹¹, which suggests a signed 16-bit fixed-point format).
This leads to the following register values:
x-axis hard-iron bias: 15.99951
y-axis hard-iron bias: -16
z-axis hard-iron bias: 15.99951
If I rotate the IMU horizontally, I get the following values on the magnetometer y-axis:
(plot of the magnetometer y-axis values)
But all values above 16 are cut off...
There is the same problem on the x-axis.
So where is the problem: in the hard-iron calibration or in the magnetometer register settings?
Thanks a lot for answering!
The customer service of x-io Technologies Limited was able to help me.
I had to change my Windows regional settings so that the decimal separator is a '.' rather than a ','.
Now the calibration dataset is valid.
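A hypothetical illustration of that pitfall (the exact parsing path inside the GUI is an assumption): a value logged with a decimal comma, re-read by code that expects '.', inflates by orders of magnitude, and any bias computed from such data then saturates the ±16 register range:

```python
sample = "0,4821"  # one reading as written under a ','-decimal locale

# Read by code that expects '.' decimals and strips other separators
# (the assumed failure mode), the value explodes:
wrong = float(sample.replace(",", ""))   # 4821.0 instead of 0.4821

# Read with the comma treated as the decimal separator, it parses as intended:
right = float(sample.replace(",", "."))  # 0.4821

# A bias estimated from inflated data then saturates the register range:
REG_MIN, REG_MAX = -16.0, 15.99951
bias = 882076.942002059                  # value reported by the GUI
print(max(REG_MIN, min(REG_MAX, bias)))  # -> 15.99951, matching the question
```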
