Healpy plotting: How do I make a figure with subplots using the healpy.mollview projection?

I've just recently started trying to use healpy and I can't figure out how to make subplots to contain my maps. I have a thermal emission map of a planet as a function of time, and I need to look at it at several moments in time (let's say 9 different times) and superimpose some coordinates, to check that my planet is rotating the right way.
So far, I can do two things:
1. Make 9 different figures with the superimposed coordinates.
2. Make a figure with 9 subplots containing 9 different maps, but which superimposes all of my coordinates on all of my subplots, instead of just the time-appropriate ones.
I'm not sure if this is a very simple problem, but it's been driving me crazy and I can't find anything that works.
I'll show you what I mean:
OPTION 1:
import numpy as np
import healpy as hp
import matplotlib.pyplot as plt

MAX = 10**23
MIN = 10**10

for i in range(9):
    t = 4000 + 10*i
    hp.visufunc.mollview(Fmap_wvpix[t, :],
                         title="Map at t=" + str(t), min=MIN, max=MAX)
    hp.visufunc.projplot(d[t, np.where(np.abs(d[t, :, 2] - SSP[t]) < 0.5), 1],
                         d[t, np.where(np.abs(d[t, :, 2] - SSP[t]) < 0.5), 2],
                         'k*', markersize=6)
    hp.visufunc.projplot(d[t, np.where(np.abs(d[t, :, 2] - SOP[t]) < 0.2), 1],
                         d[t, np.where(np.abs(d[t, :, 2] - SOP[t]) < 0.2), 2],
                         'r*', markersize=6)
This makes 9 figures that look pretty much like this:
Flux map superimposed with some stars at time = t
But I need a lot of them, so I want to make one image that contains 9 subplots that each look like the image.
OPTION 2:
fig = plt.figure(figsize=(10, 8))
for i in range(9):
    t = 4000 + 10*i
    hp.visufunc.mollview(Fmap_wvpix[t, :],
                         title="Map at t=" + str(t), min=MIN, max=MAX,
                         sub=int('33' + str(i + 1)))
    hp.visufunc.projplot(d[t, np.where(np.abs(d[t, :, 2] - SSP[t]) < 0.5), 1],
                         d[t, np.where(np.abs(d[t, :, 2] - SSP[t]) < 0.5), 2],
                         'k*', markersize=6)
    hp.visufunc.projplot(d[t, np.where(np.abs(d[t, :, 2] - SOP[t]) < 0.2), 1],
                         d[t, np.where(np.abs(d[t, :, 2] - SOP[t]) < 0.2), 2],
                         'r*', markersize=6)
This gives me subplots but it draws all the projplot stars on all of my subplots! (see following image)
Subplots with too many stars
I know that I need a way to call the axes that holds the time = t map and draw the stars for time = t on the appropriate map, but everything I've tried so far has failed. I've mostly tried to use projaxes, thinking I could define a matplotlib axes and draw the stars on it, but it doesn't work. Any advice?
Also, I would like to draw some lines on my map, but I can't figure out how to do that either. The documentation says projplot, but it won't draw anything if I don't tell it I want a marker.
PS: This code is probably useless to you as it won't work if you don't have my arrays. Here's a simpler version that should run:
import numpy as np
import healpy as hp
import matplotlib.pyplot as plt

NSIDE = 8
m = np.arange(hp.nside2npix(NSIDE))*1
MAX = 900
MIN = 0

fig = plt.figure(figsize=(10, 8))
for i in range(9):
    t = 4000 + 10*i
    hp.visufunc.mollview(m + 100*i, title="Map at t=" + str(t), min=MIN, max=MAX,
                         sub=int('33' + str(i + 1)))
    hp.visufunc.projplot(1.5, 0 + 30*i, 'k*', markersize=16)
So this is supposed to give me one star for each frame and the star is supposed to be moving. But instead it's drawing all the stars on all the frames.
What can I do? I don't understand the documentation.

If you want to have healpy plots in matplotlib subplots, the following would be the way to go. The key is to use plt.axes() to select the active subplot and to use the hold=True keyword in the healpy functions.
import healpy as hp
import numpy as np
import matplotlib.pyplot as plt
fig, (ax1, ax2) = plt.subplots(ncols=2)
plt.axes(ax1)
hp.mollview(np.random.random(hp.nside2npix(32)), hold=True)
plt.axes(ax2)
hp.mollview(np.arange(hp.nside2npix(32)), hold=True)
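Applied to the 9-panel case from the question, an untested sketch along the same lines (using the simplified example data from the question) could look like the following. Note that projplot also accepts a line-style format string such as 'r-', which is how you would draw lines instead of markers:
import healpy as hp
import numpy as np
import matplotlib.pyplot as plt

NSIDE = 8
m = np.arange(hp.nside2npix(NSIDE)) * 1.0

fig, axes = plt.subplots(nrows=3, ncols=3, figsize=(10, 8))
for i, ax in enumerate(axes.flat):
    t = 4000 + 10 * i
    plt.axes(ax)  # select this subplot (plt.sca(ax) on newer Matplotlib)
    hp.mollview(m + 100 * i, title="Map at t=" + str(t),
                min=0, max=900, hold=True)
    # this projplot now lands only on the currently selected (time = t) panel
    hp.projplot(1.5, 0 + 30 * i, 'k*', markersize=16)
    # a line instead of markers: pass arrays of points and a line-style format string
    hp.projplot(np.full(50, 1.5), np.linspace(0.0, np.pi, 50), 'r-')
plt.show()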

I have just encountered this question looking for a solution to the same problem, but managed to find it from the documentation of mollview (here).
As you can see there, 'sub' accepts the same syntax as matplotlib's subplot function. That format is:
( # of rows, # of columns, # of current subplot)
E.g. to make your plot, the value sub wants to receive in each iteration is
sub=(3,3,i)
Where i runs from 1 to 9 (3*3).
This worked for me. I haven't tried it with your code, but it should work.
Hope this helps!
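For example, applied to the simpler example from the question (a sketch, not tested against the real arrays), the loop would look like:
import numpy as np
import healpy as hp
import matplotlib.pyplot as plt

NSIDE = 8
m = np.arange(hp.nside2npix(NSIDE)) * 1.0

fig = plt.figure(figsize=(10, 8))
for i in range(9):
    t = 4000 + 10 * i
    # sub takes (rows, columns, index), with index running from 1 to 9
    hp.mollview(m + 100 * i, title="Map at t=" + str(t),
                min=0, max=900, sub=(3, 3, i + 1))
plt.show()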

Related

How is Spark reading my image using the image format?

It might be a silly question, but I can't figure out how Spark reads my image using the spark.read.format("image").load(....) argument.
After importing my image, which gives me the following:
>>> image_df.select("image.height","image.width","image.nChannels", "image.mode", "image.data").show()
+------+-----+---------+----+--------------------+
|height|width|nChannels|mode| data|
+------+-----+---------+----+--------------------+
| 430| 470| 3| 16|[4D 55 4E 4C 54 4...|
+------+-----+---------+----+--------------------+
I arrive at the conclusion that:
my image is 430x470 pixels,
my image is colored (RGB due to nChannels = 3), which is an OpenCV-compatible type,
my image mode is 16, which corresponds to a particular OpenCV byte order.
Does someone know which website/documentation I could browse to learn more about it?
the data in the data column is of type Binary but:
when I run image_df.select("image.data").take(1) I get an output which seems to be only one array (see below).
>>> image_df.select("image.data").take(1)
# **1/** Here are the last elements of the result
....<<One Eternity Later>>....x92\x89\x8a\x8d\x84\x86\x89\x80\x84\x87~'))]
# 2/ I got also several part of the result which looks like:
.....\x89\x80\x80\x83z|\x7fvz}tpsjqtkrulsvmsvmsvmrulrulrulqtkpsjnqhnqhmpgmpgmpgnqhnqhn
qhnqhnqhnqhnqhnqhmpgmpgmpgmpgmpgmpgmpgmpgnqhnqhnqhnqhnqhnqhnqhnqhknejmdilcilchkbh
kbilcilckneloflofmpgnqhorioripsjsvmsvmtwnvypx{ry|sz}t{~ux{ry|sy|sy|sy|sz}tz}tz}tz}
ty|sy|sy|sy|sz}t{~u|\x7fv|\x7fv}.....
What comes next is linked to the results displayed above. These points might be due to my lack of knowledge of OpenCV (or something else). Nonetheless:
1/ I don't understand the fact that if I have an RGB image I should get 3 matrices, yet the output finishes with .......\x84\x87~'))]. I was expecting to obtain something like [(...),(...),(...\x87~')].
2/ Does this part have a special meaning? Like a separator between each matrix or something?
To be clearer about what I'm trying to achieve, I want to process images to do pixel comparisons between images. Therefore, I want to know the pixel values for a given position in my image (I assume that if I have an RGB image, I shall have 3 pixel values for a given position).
Example: let's say that I have a webcam pointing at the sky only during the day, and I want to know the values of a pixel at a position corresponding to the top-left sky part. I found out that the concatenation of those values gives the colour light blue, which says that the photo was taken on a sunny day. Let's say that the only possibility is that a sunny day gives the colour light blue.
Next, I want to compare the previous concatenation with another concatenation of pixel values at the exact same position, but from a picture taken the next day. If I find out that they are not equal, then I conclude that the given picture was taken on a cloudy/rainy day. If equal, then a sunny day.
Any help on that would be highly appreciated. I have simplified my example for better understanding, but my goal is pretty much the same. I know that ML models exist to achieve this kind of thing, but I would be happy to try this first. My first goal is to split this column into 3 columns corresponding to each colour code: a red matrix, a green matrix, a blue matrix.
I think I have the logic. I used the keras.preprocessing.image.img_to_array() function to understand how the values are classified (since I have an RGB image, I must have 3 matrices: one for each colour R G B). Posting this in case someone wonders how it works; I might be wrong, but I think I have something:
from keras.preprocessing import image
import numpy as np
from PIL import Image
# Using spark built-in data source
first_img = spark.read.format("image").schema(imageSchema).load(".....")
raw = first_img.select("image.data").take(1)[0][0]
np.shape(raw)
(606300,) # which is 470*430*3
# Using keras function
img = image.load_img(".../path/to/img")
yy = image.img_to_array(img)
>>> np.shape(yy)
(430, 470, 3) # the form is good but I have a problem of order since:
>>> raw[0], raw[1], raw[2]
(77, 85, 78)
>>> yy[0][0]
array([78., 85., 77.], dtype=float32)
# Therefore I used the numpy reshape function directly on raw
# to get an array of 430 rows x 470 columns x 3 channels:
array = np.reshape(raw, (430, 470, 3))
xx = image.img_to_array(array) # OPTIONAL and not used here
>>> array[0][0] == (raw[0],raw[1],raw[2])
array([ True, True, True])
>>> array[0][1] == (raw[3],raw[4],raw[5])
array([ True, True, True])
>>> array[0][2] == (raw[6],raw[7],raw[8])
array([ True, True, True])
>>> array[0][3] == (raw[9],raw[10],raw[11])
array([ True, True, True])
So if I understood correctly, Spark reads the image as one big array - (606300,) here - where each element is ordered and corresponds to its respective colour shade (in OpenCV's BGR channel order, which explains why the values above come out reversed compared to Keras).
After doing my little transformation, I obtain 430 matrices of 470 lines x 3 columns. Since my image is 470x430 (width x height), each matrix corresponds to a pixel height (row) position; within each matrix there are 470 lines (one per width position) and 3 columns (one per colour channel).
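To get the three per-colour matrices mentioned in the question, a minimal NumPy sketch on top of the reshaped array could look like this (the channel order is assumed to be OpenCV-style BGR as discussed above, and another_array is a hypothetical second image reshaped the same way):
import numpy as np

# raw is the flat (606300,) array from image.data, reshaped to (height, width, channels)
array = np.reshape(raw, (430, 470, 3))

# split into one 430x470 matrix per channel (assuming BGR order)
blue = array[:, :, 0]
green = array[:, :, 1]
red = array[:, :, 2]

# pixel comparison at a given (row, col) position between two images
row, col = 0, 0
same_pixel = np.array_equal(array[row, col], another_array[row, col])  # another_array: hypothetical second image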
Hope that helps someone :)!

Extracting table structures from image

I have a bunch of images like
What would be a good way to extract just the table structure from the image? I'm only interested in extracting the straight lines.
I have been toying around with OpenCV Finding Contours code sample and the results are quite promising. I'm just wondering if there is maybe a better way?
OpenCV has a nice way to detect line segments. Here is a code snippet in python:
import math
import numpy as np
import cv2

img = cv2.imread('page2.png')
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)

lsd = cv2.createLineSegmentDetector(0)
dlines = lsd.detect(gray)

for dline in dlines[0]:
    x0 = int(round(dline[0][0]))
    y0 = int(round(dline[0][1]))
    x1 = int(round(dline[0][2]))
    y1 = int(round(dline[0][3]))
    cv2.line(img, (x0, y0), (x1, y1), 255, 1, cv2.LINE_AA)
    # print line segment length
    a = (x0 - x1) * (x0 - x1)
    b = (y0 - y1) * (y0 - y1)
    c = a + b
    print(math.sqrt(c))

cv2.imwrite('page2_lines.png', img)
Kindly go through my GitHub repository: Code for table extraction.
The developed code detects tables and extracts the information while keeping the spatial coordinates intact.
The code detects lines from tables, as shown in the image below. I hope it solves your problem.
The extracted output in terms of a table is shown below.
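Since the repository code isn't reproduced here, here is an illustrative sketch (not the repository's actual code) of the usual morphological approach for pulling out just the horizontal and vertical table lines with plain OpenCV: open the binarised image with long, thin structuring elements and combine the two results.
import cv2
import numpy as np

img = cv2.imread('page2.png')
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)

# binarise with the table lines as foreground (white)
binary = cv2.adaptiveThreshold(~gray, 255, cv2.ADAPTIVE_THRESH_MEAN_C,
                               cv2.THRESH_BINARY, 15, -2)

# keep only long horizontal runs
horiz_kernel = cv2.getStructuringElement(cv2.MORPH_RECT, (40, 1))
horizontal = cv2.morphologyEx(binary, cv2.MORPH_OPEN, horiz_kernel)

# keep only long vertical runs
vert_kernel = cv2.getStructuringElement(cv2.MORPH_RECT, (1, 40))
vertical = cv2.morphologyEx(binary, cv2.MORPH_OPEN, vert_kernel)

# the table skeleton is the union of both
table_mask = cv2.add(horizontal, vertical)
cv2.imwrite('table_lines.png', table_mask)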

Count number of objects using watershed algorithm - Scikit-image

I am trying to find the number of objects in a given image using watershed segmentation. Consider, for example, the coins image. Here I would like to know the number of coins in the image. I implemented the code available in the Scikit-image documentation, tweaked it a little, and got results similar to those displayed on the documentation page.
After looking at the functions used in the code in detail, I found that ndimage.label() also returns the number of unique objects found in the image (mentioned in its documentation), but when I print that value I get 53, which is very high compared to the number of coins in the actual image.
Can somebody suggest a method to find the number of objects in an image?
Here is a version of your code that counts the coins in one of two ways: a) by directly segmenting the distance image and b) by doing watershed first and rejecting tiny intersecting regions.
from __future__ import print_function
import numpy as np
import matplotlib.pyplot as plt
from skimage import io, color, filter as filters
from scipy import ndimage
from skimage.morphology import watershed
from skimage.feature import peak_local_max
from skimage.measure import regionprops, label

image = color.rgb2gray(io.imread('water_coins.jpg', plugin='freeimage'))
image = image < filters.threshold_otsu(image)
distance = ndimage.distance_transform_edt(image)

# Here's one way to measure the number of coins directly
# from the distance map
coin_centres = (distance > 0.8 * distance.max())
print('Number of coins (method 1):', np.max(label(coin_centres)))

# Or you can proceed with the watershed labeling
local_maxi = peak_local_max(distance, indices=False, footprint=np.ones((3, 3)),
                            labels=image)
markers, num_features = ndimage.label(local_maxi)
labels = watershed(-distance, markers, mask=image)

# ...but then you have to clean up the tiny intersections between coins
regions = regionprops(labels)
regions = [r for r in regions if r.area > 50]
print('Number of coins (method 2):', len(regions) - 1)

fig, axes = plt.subplots(ncols=3, figsize=(8, 2.7))
ax0, ax1, ax2 = axes
ax0.imshow(image, cmap=plt.cm.gray, interpolation='nearest')
ax0.set_title('Overlapping objects')
ax1.imshow(-distance, cmap=plt.cm.jet, interpolation='nearest')
ax1.set_title('Distances')
ax2.imshow(labels, cmap=plt.cm.spectral, interpolation='nearest')
ax2.set_title('Separated objects')
for ax in axes:
    ax.axis('off')

fig.subplots_adjust(hspace=0.01, wspace=0.01, top=1, bottom=0, left=0,
                    right=1)
plt.show()

Examples/tutorials of Adaptive Metropolis for images using PyMC

I am looking for examples or tutorials of the AdaptiveMetropolis step method used for image processing.
The only vaguely image-related resource that I have found so far is this astronomy dissertation and the related GitHub repo.
This wider question does not seem to provide PyMC example code.
What about finding the peak on this simulated array?
import numpy as np
from matplotlib import pyplot as plt

sz = (12, 18)
data_input = np.random.normal(loc=5.0, size=sz)
data_input[7:10, 2:6] = np.random.normal(loc=100.0, size=(3, 4))

fig = plt.figure()
ax = fig.add_subplot(1, 1, 1)
im = ax.imshow(data_input)
ax.set_title("input")
The closest I know of is here: http://nbviewer.ipython.org/github/CamDavidsonPilon/Probabilistic-Programming-and-Bayesian-Methods-for-Hackers/blob/master/Chapter5_LossFunctions/LossFunctions.ipynb (see Example: Kaggle contest on Observing Dark World).
In that thread you asked a specific question (https://github.com/pymc-devs/pymc/issues/653) about finding an array in an image. Here is a first attempt at a model:
In that case it seems like you are trying to estimate a 2-D uniform distribution with Gaussian noise. You'll have to translate this into an actual model, but this would be one idea:
lower_x ~ DiscreteUniform(0, 20)
upper_x ~ DiscreteUniform(0, 20)
lower_y ~ DiscreteUniform(0, 20)
upper_y ~ DiscreteUniform(0, 20)
height ~ Normal(100, 1)
noise ~ InvGamma(1, 1)
means = zeros((20, 20))
means[lower_x:upper_x, lower_y:upper_y] = height  # this needs to be a deterministic
data ~ Normal(mu=means, sd=noise)
It might be better to code upper_x as an offset and then do lower_x:lower_x+offset_x, otherwise you need a potential to enforce lower_x < upper_x.
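As a rough, untested sketch of how that idea could be written in PyMC2 (the version that provides AdaptiveMetropolis), using the offset parameterisation suggested above; names like offset_x and noise_sd are illustrative, not from the original model:
import numpy as np
import pymc as pm  # PyMC 2.x

# simulated data, as in the question (here on a 20x20 grid)
sz = (20, 20)
data_obs = np.random.normal(loc=5.0, size=sz)
data_obs[7:10, 2:6] = np.random.normal(loc=100.0, size=(3, 4))

# bright rectangle parameterised as lower corner + positive offset
lower_x = pm.DiscreteUniform('lower_x', 0, 19)
offset_x = pm.DiscreteUniform('offset_x', 1, 10)
lower_y = pm.DiscreteUniform('lower_y', 0, 19)
offset_y = pm.DiscreteUniform('offset_y', 1, 10)
height = pm.Normal('height', mu=100.0, tau=1.0)
noise_sd = pm.InverseGamma('noise_sd', 1.0, 1.0)

@pm.deterministic
def means(lx=lower_x, ox=offset_x, ly=lower_y, oy=offset_y, h=height):
    m = np.zeros(sz)
    m[lx:lx + ox, ly:ly + oy] = h
    return m

@pm.deterministic
def tau(sd=noise_sd):
    return 1.0 / sd ** 2  # PyMC2's Normal is parameterised by precision, not sd

data = pm.Normal('data', mu=means, tau=tau, value=data_obs, observed=True)

mcmc = pm.MCMC([lower_x, offset_x, lower_y, offset_y, height, noise_sd,
                means, tau, data])
# use AdaptiveMetropolis for the continuous parameters
mcmc.use_step_method(pm.AdaptiveMetropolis, [height, noise_sd])
mcmc.sample(iter=20000, burn=10000)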

Horizontal Histogram in OpenCV

I am a newbie to OpenCV, and I am working on a senior project related to image processing. I have a question: can I make a horizontal or vertical histogram with some functions of OpenCV?
Thanks,
Truong
The most efficient way to do this is by using the cvReduce function. There's a parameter that lets you select whether you want a horizontal or vertical projection.
You can also do it by hand with the functions cvGetCol and cvGetRow combined with cvSum.
Based on the link you provided in a comment, this is what I believe you're trying to do.
You want to create an array with n elements, where n is the number of columns in the input image. The value of the nth element of the array is the sum of all the pixels in the nth column.
You can calculate this array by looping over the columns of the input image, using cvGetSubRect to access the pixels in that column, and cvSum to sum those pixels.
Here is some Python code that does that, assuming a grayscale image:
import cv

def verticalProjection(img):
    "Return a list containing the sum of the pixels in each column"
    (w, h) = cv.GetSize(img)
    sumCols = []
    for j in range(w):
        col = cv.GetSubRect(img, (j, 0, 1, h))
        sumCols.append(cv.Sum(col)[0])
    return sumCols
Updating carnieri's answer (some of the old cv functions no longer work):
import numpy as np
import cv2

def verticalProjection(img):
    "Return a list containing the sum of the pixels in each column"
    (h, w) = img.shape[:2]
    sumCols = []
    for j in range(w):
        col = img[0:h, j:j+1]  # y1:y2, x1:x2
        sumCols.append(np.sum(col))
    return sumCols
Regards.
An example of using cv2.reduce with OpenCV 3 in Python:
import numpy as np
import cv2

img = cv2.imread("test_1.png")
x_sum = cv2.reduce(img, 0, cv2.REDUCE_SUM, dtype=cv2.CV_32S)  # single row: sum of each column
y_sum = cv2.reduce(img, 1, cv2.REDUCE_SUM, dtype=cv2.CV_32S)  # single column: sum of each row
