What does 'ZScaleInterval()' do in AstroPy?

I have looked at the documentation but I didn't find what I was looking for. All the explanations I could find on the web state that this is used to improve contrast in images.
Look at this code for example (which is meant to run on an astronomical FITS image):
import matplotlib.pyplot as plt
from astropy.visualization import ZScaleInterval

z = ZScaleInterval()
z1, z2 = z.get_limits(image_data)   # image_data is the 2D array read from the FITS file

plt.figure()
plt.imshow(image_data, vmin=z1, vmax=z2)
According to the documentation, get_limits returns the minimum and maximum value in the interval based on the values provided. I'm guessing it means the maximum and minimum intensities. What do vmax and vmin do?

From my understanding, vmin and vmax are the lower and upper limits of the data range that imshow maps onto the colormap: everything at or below vmin is shown with the colormap's lowest color, everything at or above vmax with its highest, and values in between are scaled linearly (with the default normalization). ZScaleInterval computes those two cut levels using the IRAF zscale algorithm, which samples the image so that faint structure remains visible instead of being washed out by a few very bright pixels such as stars.
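As a rough illustration (this is a sketch of the effect, not ZScaleInterval's actual implementation), what vmin and vmax do to the displayed values is essentially a clip-and-rescale before the colormap is applied; apply_cuts below is a hypothetical helper name:

import numpy as np

# Illustrative only: clip the data to the [z1, z2] cut levels and rescale that
# range to [0, 1], which is roughly what imshow does with vmin/vmax before
# looking up colors in the colormap.
def apply_cuts(data, vmin, vmax):
    clipped = np.clip(data, vmin, vmax)
    return (clipped - vmin) / (vmax - vmin)

# scaled = apply_cuts(image_data, z1, z2)   # faint detail now spans most of 0..1

I believe the astropy interval objects can also be called directly on the array (e.g. z(image_data)) to get data normalized in this way, if you prefer that to passing vmin/vmax to imshow.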

Related

How does the center_box parameter of sklearn.datasets.make_blobs() work?

I was searching online for how the center_box parameter of sklearn.datasets.make_blobs() works, but I could not find any good answer about it.
How does this parameter affect the sample dataset generation?
From the documentation:
center_box: tuple of float (min, max), default=(-10.0, 10.0)
The bounding box for each cluster center when centers are generated at random.
This does not mean that center_box sets how big a cluster will be. It is the bounding box inside which the cluster centers themselves are placed when the centers are generated at random (i.e. when you pass centers as an integer): each coordinate of every random center is drawn between min and max. A wider box therefore spreads the clusters further apart, while the spread of points within each cluster is controlled by cluster_std.
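For instance (a small illustrative sketch; the parameter values and variable names are arbitrary):

from sklearn.datasets import make_blobs

# With a narrow center_box the three random centers are packed close together;
# with a wide one they are spread far apart. The width of each individual
# cluster is still governed by cluster_std, not by center_box.
X_tight, y_tight = make_blobs(n_samples=300, centers=3,
                              center_box=(-2.0, 2.0), cluster_std=1.0,
                              random_state=0)
X_spread, y_spread = make_blobs(n_samples=300, centers=3,
                                center_box=(-100.0, 100.0), cluster_std=1.0,
                                random_state=0)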

Function to find common high or low prices for a number of candles

Normally we use the functions iHigh() or iLow() to find the highest or lowest price of a certain candle.
My question is: is there a way or function I can use to find a common price shared by a number of candles (let's say the 10 previous candles) and use the returned values to draw a line of support or resistance?
You can find the highest high or the lowest low of a series of candles using iHighest() and iLowest(). An example would be:
int index=iHighest(NULL,0,MODE_HIGH,20,4);
double highest=iHigh(NULL,0,index);
This example searches 20 bars starting at shift 4, i.e. bars 4 through 23: iHighest() returns the index of the bar with the highest high, and iHigh() then reads that bar's high price.
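If it helps to see the same windowed search outside MQL4, here is a hypothetical plain-Python sketch of what that call does (highs is an assumed list ordered like an MQL4 series, with index 0 being the current bar, and highest_high is just an illustrative name):

# Hypothetical sketch of iHighest(NULL, 0, MODE_HIGH, count, start): among the
# 'count' bars beginning at bar 'start', find the bar whose high is largest.
def highest_high(highs, count, start):
    window = highs[start:start + count]                        # bars start .. start+count-1
    offset = max(range(len(window)), key=lambda i: window[i])  # position of the max in the window
    return start + offset, window[offset]                      # (bar index, price level)

# e.g. a resistance level from the 10 most recent closed candles (bars 1..10):
# bar_index, resistance = highest_high(highs, count=10, start=1)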

How to scale % change based features so that they are viewed "similarly" by the model

I have some features that are zero-centered values and are supposed to represent the change between a current value and a previous value. Generally speaking, I believe there should be some symmetry between these values, i.e. there should be roughly the same number of positive values as negative values, and they should operate on roughly the same scale.
When I try to scale my samples using MaxAbsScaler, I notice that the negative values of this feature get almost completely drowned out by the positive values, and I don't really have any reason to believe the positive values should be that much larger than the negative ones.
What I've noticed is that, fundamentally, the magnitudes of percentage-change values are not symmetrical. For example, a value that goes from 50 to 200 is a +300.0% change, while a value that goes from 200 to 50 is a -75.0% change. I get that there is a reason for this, but for my feature I don't see why a change from 50 to 200 should be treated as 3x+ more "important" than the same change in the opposite direction.
Given this, I do not believe there is any reason to want my model to treat a change from 200 to 50 as a "lesser" change than a change from 50 to 200. Since I am trying to represent the change of a value over time, I want to abstract this pattern so that my model can "visualize" the change of a value over time the same way a person would.
Right now I am solving this with this formula:
if curr > prev:
    return curr / prev - 1
else:
    return (prev / curr - 1) * -1
And this does seem to treat changes in value similarly regardless of direction, i.e. from the example above, 50 -> 200 gives +300% and 200 -> 50 gives -300%. Is there a reason why I shouldn't be doing this? Does this accomplish my goal? Has anyone run into similar dilemmas?
This is a discussion question, and it's difficult to know the right answer without knowing the physical relevance of your feature. You are calculating a percentage change, and a percent change depends on the original value. I am not a big fan of a custom formula whose only purpose is to make percent change symmetric, since it adds a layer of complexity that is, in my opinion, unnecessary.
If you want change to be symmetric, you can try a direct difference or a factor change. There's nothing to suggest that difference or factor change is less correct than percent change. So, depending on the physical relevance of your feature, each of the following symmetric measures would be a correct way to measure change:
Difference change -> 50 to 200 yields 150, 200 to 50 yields -150
Factor change with logarithm -> 50 to 200 yields log(4), 200 to 50 yields log(1/4) = -log(4)
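For concreteness, here is a small plain-Python sketch comparing the three measures on the example values (the function names are just for illustration):

import math

def percent_change(prev, curr):
    return (curr - prev) / prev * 100      # 50 -> 200: +300.0, 200 -> 50: -75.0 (asymmetric)

def difference_change(prev, curr):
    return curr - prev                     # 50 -> 200: +150, 200 -> 50: -150 (symmetric)

def log_factor_change(prev, curr):
    return math.log(curr / prev)           # 50 -> 200: +log(4), 200 -> 50: -log(4) (symmetric)

for prev, curr in [(50, 200), (200, 50)]:
    print(prev, "->", curr,
          percent_change(prev, curr),
          difference_change(prev, curr),
          log_factor_change(prev, curr))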
You're having trouble because you haven't brought the abstract questions into your paradigm.
"... my model can "visualize" ... same way a person would."
In this paradigm, you need a metric for "same way". There is no such empirical standard. You've dropped both of the simple standards -- relative error and absolute error -- and you posit some inherently "normal" standard that doesn't exist.
Yes, we run into these dilemmas: choosing a success metric. You've chosen a classic example from "How To Lie With Statistics"; depending on the choice of starting and finishing proportions and the error metric, you can "prove" all sorts of things.
This brings us to your central question:
Does this accomplish my goal?
We don't know. First of all, you haven't given us your actual goal. Rather, you've given us an indefinite description and a single example of two data points. Second, you're asking the wrong entity. Make your changes, run the model on your data set, and examine the properties of the resulting predictions. Do those properties satisfy your desired end result?
For instance, given your posted data points, (200, 50) and (50, 200), how would other examples fit in, such as (1, 4), (1000, 10), etc.? If you're simply training on the proportion of change over the full range of values involved in that transaction, your proposal is just what you need: use the higher value as the basis. Since you didn't post any representative data, we have no idea what sort of distribution you have.

Mathematical Operations on an Image Stack in ImageJ (Fiji)

I am writing an ImageJ/Fiji plugin in Jython using the PyDev plugin in Eclipse. The plugin will be the ImageJ version of an existing denoising program called CANDLE, written in MATLAB. Changing the value of every pixel (voxel) of an image in MATLAB is trivial:
InputImage = 2 * sqrt(InputImage + (3/8));
Median3DFilteredImage = 2 * sqrt(Median3DFiltered + (3/8));
Here "InputImage" and "Median3DFilteredImage" are 3D Matrices, with the last dimension being time (slices). To reproduced the following operation on an ImageJ image, I had to employ two for loops, one to iterate through the image slices (3rd dimension) and the other loop to iterate over all the pixels in a particular slice:
medFiltStack = medianFilteredImage.getStack()
newMedFiltStack = ImageStack(medianFilteredImage.width, medianFilteredImage.height)
InputStack = InputImage.getStack()
newInputStack = ImageStack(InputImage.width, InputImage.height)
for i in xrange(1, medianFilteredImage.getNSlices() + 1):
    ip = medFiltStack.getProcessor(i).convertToFloat()
    ip2 = InputStack.getProcessor(i).convertToFloat()
    pixels = ip.getPixels()
    pixels2 = ip2.getPixels()
    for j in xrange(len(pixels)):
        pixels[j] = 2 * javaMath.sqrt(pixels[j] + (3.0 / 8.0))
        pixels2[j] = 2 * javaMath.sqrt(pixels2[j] + (3.0 / 8.0))
    newMedFiltStack.addSlice(ip)
    newInputStack.addSlice(ip2)
medianFilteredImage = ImagePlus("MedianFiltered-Image", newMedFiltStack)
InputImage = ImagePlus("Input-Image", newInputStack)
My question is as follows: is there a way to perform mathematical operations on an image stack, i.e. on every pixel (voxel) in the stack, without having to write code that explicitly visits every pixel in every slice, i.e. without for loops? It just seems to be a very primitive way of going about it, and I am wondering if there isn't a better way to do this operation. I also had to work with copies and then give the new images the same names as before, rather than working with the original images and editing them directly. So is there a way to edit the pixel values of the original images rather than copies? Any help would be appreciated, as there are plenty more math operations that I have to perform. It would be super useful to find a way to do mathematical operations on images optimally, both in terms of the amount of code and, if possible, in terms of speed.
In pure ImageJ 1.x, the answer is: no, there's no other way than to visit every slice and get its ImageProcessor. That's how ImageJ1 deals with its limited number of dimensions (z, time, channel): you always have a (Hyper-)Stack of 2D planes.
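That said, you can at least drop the inner per-pixel loop: if I remember the ImageJ1 API correctly, ImageProcessor has in-place math methods such as add(), sqrt() and multiply(), so an (untested) per-slice sketch of the transform from the question could look like this:

from ij import ImagePlus, ImageStack

# Sketch only: same per-slice loop, but the arithmetic is done by the
# processor's in-place math methods instead of a Python loop over pixels.
# Variable names follow the question.
stack = InputImage.getStack()
newStack = ImageStack(InputImage.width, InputImage.height)
for i in xrange(1, InputImage.getNSlices() + 1):
    ip = stack.getProcessor(i).convertToFloat()
    ip.add(3.0 / 8.0)    # pixel + 3/8, in place
    ip.sqrt()            # sqrt(pixel + 3/8), in place
    ip.multiply(2.0)     # 2 * sqrt(pixel + 3/8), in place
    newStack.addSlice(ip)
InputImage = ImagePlus("Input-Image", newStack)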
There is, however, a more powerful way of dealing with n-dimensional images called ImgLib, which is included in Fiji together with ImageJ2.
To avoid re-inventing the wheel, you should have a look at Jean-Yves Tinevez's great plugin Image Expression Parser. Use it headlessly with Fiji, or just have a look at its source code (it uses a previous version, ImgLib1, but the idea is the same: you avoid hard-coding the dimensions by using Java generics); see e.g. the sqrt function:
public final <R extends RealType<R>> float evaluate(final R alpha) {
    return (float) Math.sqrt(alpha.getRealDouble());
}

selecting channels in calchist opencv

I have code for computing the histogram of HSV and YUV images. As I am trying to obtain values corresponding to brightness alone, I want the 'V' channel from the HSV image and the luma ('Y') channel from the YUV image. This is the code I have used:
int channels[] = {0};
calcHist(&src_yuv,1,channels,Mat(),hist,1,histSize,ranges,true,false);
This sample code is for YUV. I just change {0} to {2} to obtain the 'V' channel values from HSV. I am getting results, but I am not sure whether I am choosing the right channels. Could you please help me confirm that those numbers select exactly the channels I want? Thanks in advance.
To be absolutely sure that channel number X corresponds to the channel you are after, consult the channelSeq attribute of the IplImage structure. If channelSeq[X] gives the name (a character) of the channel you are after, then you have found it.
But, given how this attribute is documented (along with other interesting ones), even if you were always using IplImage there is no guarantee that the information contained there would be accurate. Thus, to be absolutely sure about the channel sequence in your image, you have to trust the conversion specification and remember it yourself. So, if you start with an image in BGR and convert using BGR2YUV, then you trust that the Y channel is the first one, and so on. If OpenCV ever changed BGR2YUV to put Y in the last channel, then too bad for you.
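For what it's worth, here is a short sketch using the Python binding (the file path and variable names are placeholders): after cvtColor the channel order should be Y,U,V for COLOR_BGR2YUV and H,S,V for COLOR_BGR2HSV, so channel 0 of the YUV image and channel 2 of the HSV image are indeed the brightness-like channels you are after.

import cv2

src = cv2.imread("input.png")                   # placeholder path, 8-bit BGR image
src_yuv = cv2.cvtColor(src, cv2.COLOR_BGR2YUV)  # channels ordered Y, U, V
src_hsv = cv2.cvtColor(src, cv2.COLOR_BGR2HSV)  # channels ordered H, S, V

hist_y = cv2.calcHist([src_yuv], [0], None, [256], [0, 256])  # Y (luma)
hist_v = cv2.calcHist([src_hsv], [2], None, [256], [0, 256])  # V (value/brightness)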
