How can I create an m * n * 3 array using numpy? - image-processing

I am trying to create an m x n x 3 matrix based on some joints with Cartesian coordinates x, y, z. First, I arrange the indices of the joints from the skeleton image into a 2D grid:
A = np.array([[[4, 3, 21, 2, 1, 13, 14, 15, 16], [4, 3, 21, 2, 1, 17, 18, 19, 20], [4, 3, 21, 9, 10, 11, 12, 24, 25], [4, 3, 21, 5,6, 7, 8, 22, 23]]])
What I cannot do is add the Cartesian coordinates (x, y, z) of those indices along the third dimension of my matrix A in order to get m x n x 3. The x, y, z of each joint would be analogous to the R, G, B channels of a color image, with R = x, G = y, B = z.

The example code below produces an m x n x 3 matrix from matrix A, where A holds the indices of the elements in a skeleton, resulting in an A_result like this:
# Create an m x n x 3 matrix from matrix A, where A holds the index of the
# skeleton element at each (m, n) location.
import numpy as np
from matplotlib import pyplot as plt

A = np.array([[[4, 3, 21, 2, 1, 13, 14, 15, 16],
               [4, 3, 21, 2, 1, 17, 18, 19, 20],
               [4, 3, 21, 9, 10, 11, 12, 24, 25],
               [4, 3, 21, 5, 6, 7, 8, 22, 23]]])

# Drop the leading singleton dimension...
A_sub = A[0]
# ...and take the size of the matrix.
m_max, n_max = A_sub.shape

# Create the basis for the result matrix. Note the format: the m x n grid
# comes first, followed by the 3 "color" dimensions.
A_result = np.zeros((m_max, n_max, 3))

# Demonstration function for the xyz coordinates.
def tell_me_xyz_coordinate_of_element(element_number):
    # In a real application there would be some measurement (or similar)
    # that looks up element_number and returns its location; here, to keep
    # the example simple, we just return random integer values.
    x = np.random.randint(0, 255)
    y = np.random.randint(0, 255)
    z = np.random.randint(0, 255)
    return x, y, z

# Fill in the result matrix.
for m in range(m_max):
    for n in range(n_max):
        # The value at (m, n) is the number of the corresponding element;
        # look up that element's x, y, z coordinates.
        element_number = A_sub[m, n]
        x, y, z = tell_me_xyz_coordinate_of_element(element_number)
        # Store the coordinates as the three channels.
        A_result[m, n, 0] = x
        A_result[m, n, 1] = y
        A_result[m, n, 2] = z

# Inspect the resulting numpy matrix as an image; remember to cast the data
# to uint8 so it can be shown as an image.
plt.imshow(np.uint8(A_result), interpolation='nearest')
title_text = ''.join(["Result matrix m x n x 3,\nwhere m=", str(m_max),
                      " n=", str(n_max),
                      ",\nwith color codes 0-255 in each m and n"])
plt.title(title_text)
plt.show()

This example may help you solve your problem; if not, please describe your question again in a clearer form.
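If you already have the joint coordinates collected in an array of shape (num_joints, 3), the same m x n x 3 result can also be built without the explicit loops, using numpy fancy indexing. A minimal sketch, assuming a hypothetical joint_xyz lookup table and the 1-based joint numbering used in the question:
import numpy as np

# The 2D grid of joint indices from the question (singleton axis dropped).
A_sub = np.array([[4, 3, 21, 2, 1, 13, 14, 15, 16],
                  [4, 3, 21, 2, 1, 17, 18, 19, 20],
                  [4, 3, 21, 9, 10, 11, 12, 24, 25],
                  [4, 3, 21, 5, 6, 7, 8, 22, 23]])

# Hypothetical lookup table: row k holds the (x, y, z) of joint number k + 1.
num_joints = 25
joint_xyz = np.random.randint(0, 255, size=(num_joints, 3))

# Fancy indexing maps every joint index in A_sub to its (x, y, z) triple
# in one step, producing an array of shape (m, n, 3).
A_result = joint_xyz[A_sub - 1]
print(A_result.shape)   # (4, 9, 3)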
#Create an m x n x 3 array and a tensor from the specific A matrix, and also show how to create an m x n x 3 np-array and the same in tensor format
import numpy as np
import tensorflow as tf
A = np.array([[[4, 3, 21, 2, 1, 13, 14, 15, 16], [4, 3, 21, 2, 1, 17, 18, 19, 20], [4, 3, 21, 9, 10, 11, 12, 24, 25], [4, 3, 21, 5,6, 7, 8, 22, 23]]])
#Let's convert A to a tensor:
A_in_tensorformat=tf.Variable(A)
#Let's make a numpy array of size m x n x 3:
m=123
n=45
B=np.ones((m,n,3))
B_in_tensorformat=tf.Variable(B)

Related

How can we reduce the size of the graph generated by Maximal Clique and remove the nodes of specific cliques?

I am using the networkx find_cliques function to find the maximal cliques in a graph. I want to reduce the size of this graph based on the maximal cliques.
Here is the code:
import torch
from torch_geometric.utils.convert import to_networkx
from torch_geometric.data import Data
import networkx as nx

edge_list = torch.tensor([
    [0, 1, 1, 2, 2, 2, 3, 4, 5, 6, 6, 6, 7, 7, 8],  # Source Nodes
    [1, 2, 3, 4, 5, 3, 9, 5, 6, 7, 8, 9, 8, 9, 9]   # Target Nodes
], dtype=torch.long)
node_features = torch.tensor([
    [-8, 1, 5, 8, 2, -3],  # Features of Node 0
    [-1, 0, 2, -3, 0, 1],  # Features of Node 1
    [1, -1, 0, -1, 2, 1],  # Features of Node 2
    [0, 1, 4, -2, 3, 4],   # Features of Node 3
], dtype=torch.long)

data = Data(x=node_features, edge_index=edge_list)
G_directed = to_networkx(data)
G_undirected = G_directed.to_undirected()
no_cliques = nx.find_cliques(G_undirected, nodes=None)
print(list(no_cliques))
List of maximal cliques = {1: [1, 2], 2: [2, 3, 4], 3: [4, 5, 6], 4: [6, 7], 5: [7, 8, 9, 10], 6: [10, 3]}
In the next step, we reduce the original graph to a coarsened graph: each clique becomes one node, and two clique-nodes are joined by an edge if the cliques are not disjoint. I want to remove any clique whose nodes have all already appeared in other cliques. In the example above, the nodes of clique 6 are already assigned to cliques 2 and 5, so in the new graph this clique should be removed from the clique list.
For better understanding, I am posting a picture.
(Image: hierarchy of the graph, showing the coarsened graph at each level.)
I want to build this type of graph hierarchy based on maximal cliques. Does anyone know how I can do this?
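One way to implement the "drop cliques whose nodes are already covered" step described above could be a greedy pass over the clique list. A minimal sketch in plain Python, using the example cliques from the question; the rule below is my reading of the description, not an existing networkx routine:
# Maximal cliques as lists of node ids (the output format of nx.find_cliques);
# these are the six example cliques from the question.
cliques = [[1, 2], [2, 3, 4], [4, 5, 6], [6, 7], [7, 8, 9, 10], [10, 3]]

# Walk through the cliques in order and keep a clique only if it contributes
# at least one node that has not yet been assigned to an earlier kept clique.
assigned = set()
kept = []
for clique in cliques:
    if not set(clique) <= assigned:
        kept.append(clique)
        assigned.update(clique)

print(kept)
# [[1, 2], [2, 3, 4], [4, 5, 6], [6, 7], [7, 8, 9, 10]]  (clique [10, 3] dropped)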

Can I use cvxpy to split integer-2D-array to two arrays?

I have a problem that I wonder if I can solve using cvxpy:
The problem:
I have a two-dimensional integer array and I want to split it into two arrays such that each row of the source array ends up in either the first or the second array.
The requirement is that, for each column, the sum of the integers in array #1 should be as close as possible to twice the sum of the integers in array #2.
Example:
Consider the input array:
[
[1, 2, 3, 4],
[4, 6, 2, 5],
[3, 9, 1, 2],
[8, 1, 0, 9],
[8, 4, 0, 5],
[9, 8, 0, 4]
]
The sums of its columns are [33, 30, 6, 29], so ideally we are looking for two arrays whose column sums would be:
Array #1: [22, 20, 4, 19]
Array #2: [11, 10, 2, 10]
Of course this is not always possible, but I am looking for the best possible solution to this problem.
A possible solution for this specific example might be:
Array #1:
[
[1, 2, 3, 4],
[4, 6, 2, 5],
[8, 4, 0, 5],
[9, 8, 0, 4]
]
With column sums: [22, 20, 5, 18]
Array #2:
[
[3, 9, 1, 2],
[8, 1, 0, 9],
]
With column sums: [11, 10, 1, 11]
Any suggestions?
You can use a boolean vector variable to select rows. The only thing left to decide is how much to penalize errors. In this case I just used the norm of the difference vector.
import cvxpy as cp
import numpy as np

data = np.array([
    [1, 2, 3, 4],
    [4, 6, 2, 5],
    [3, 9, 1, 2],
    [8, 1, 0, 9],
    [8, 4, 0, 5],
    [9, 8, 0, 4]
])

# x[i] == 1 selects row i for array #1, x[i] == 0 leaves it for array #2.
x = cp.Variable(data.shape[0], boolean=True)
# x @ data gives the column sums of array #1 and (1 - x) @ data those of
# array #2; minimize the norm of their deviation from the 2:1 target.
prob = cp.Problem(cp.Minimize(cp.norm((x - 2 * (1 - x)) @ data)))
prob.solve()

A = np.round(x.value) @ data
B = np.round(1 - x.value) @ data
A and B are the column sums of the two selected row sets:
(array([21., 20., 4., 19.]), array([12., 10., 2., 10.]))
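If you also need the two row subsets themselves (not just their column sums), the rounded selection vector can be used as a boolean mask on the rows, for example:
# Recover the actual split: rows where x == 1 go to array #1, the rest to array #2.
mask = np.round(x.value).astype(bool)
array_1 = data[mask]
array_2 = data[~mask]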

Dask equivalent of numpy (convolve + hstack)?

I currently have a function that computes a sliding sum across a 1-D numpy array (vector) using convolve and hstack. I would like to create an equivalent function using dask, but the various ways I've tried so far have not worked out.
What I'm trying to do is compute a "sliding sum" over n numbers of the array, unless any of those numbers is NaN, in which case the sum should also be NaN. The first (n - 1) elements of the result should likewise be NaN, since no wrap-around/reach-behind is assumed.
For example:
input vector: [3, 4, 6, 2, 1, 3, 5, np.NaN, 8, 5, 6]
n: 3
result: [NaN, NaN, 13, 12, 9, 6, 9, NaN, NaN, NaN, 19]
or
input vector: [1, 5, 7, 2, 3, 4, 9, 6, 3, 8]
n: 4
result: [NaN, NaN, NaN, 15, 17, 16, 18, 22, 22, 26]
The function I currently have for this using numpy functions:
def sum_to_scale(values, scale):
    # don't bother if the number of values to sum is 1 (will result in duplicate array)
    if scale == 1:
        return values
    # get the valid sliding summations with 1D convolution
    sliding_sums = np.convolve(values, np.ones(scale), mode="valid")
    # pad the first (n - 1) elements of the array with NaN values
    return np.hstack(([np.NaN] * (scale - 1), sliding_sums))
How can I do the above using the dask array API (and/or dask_image.ndfilters) to achieve the same functionality?
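One possible approach (a sketch, not a definitive dask_image answer) is da.map_overlap: each chunk borrows the (scale - 1) trailing values of its left neighbour so that windows straddling chunk boundaries can be computed locally, and the constant NaN boundary reproduces the NaN head. This assumes scale - 1 is not larger than the chunk size:
import numpy as np
import dask.array as da

def sum_to_scale_dask(values, scale):
    # values: 1-D dask array, scale: window length
    if scale == 1:
        return values

    def block_sliding_sum(block):
        # same numpy recipe as above, applied per (overlapped) chunk
        sums = np.convolve(block, np.ones(scale), mode="valid")
        return np.hstack(([np.nan] * (scale - 1), sums))

    return da.map_overlap(
        block_sliding_sum,
        values,
        depth=scale - 1,   # pull (scale - 1) values from the neighbouring chunks
        boundary=np.nan,   # pad the very first chunk with NaN -> NaN head
        dtype=float,
    )

v = da.from_array(np.array([3, 4, 6, 2, 1, 3, 5, np.nan, 8, 5, 6]), chunks=4)
print(sum_to_scale_dask(v, 3).compute())
# [nan nan 13. 12.  9.  6.  9. nan nan nan 19.]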

Remove multiple ranges of values from array

a1 = [0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20]
a2 = [2..4, 8..11, 16..17]
Removing one range of values from an array can be done like this:
[1, 2, 3, 4, 5, 6, 7, 8, 9].slice!(2..5)
Iterating over the ranges and applying the same as above (a2.each { |range| a1.slice!(range) }) isn't perfect, though: once a range has been removed, the remaining ranges no longer reference the right indices.
So, any suggestions on how to remove the ranges in a2 from a1 in the most efficient way?
a1 is normally [*0..10080]. a2 has about 30 ranges, each covering hundreds of values.
If the result of the first operation affects the second, you either have to track the resulting offsets, which can get complicated, or simply do the reverse: use the ranges to flag which indices you want to keep or drop:
require 'set'

a1 = [0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20]
a2 = [2..4, 8..11, 16..17]

# Convert the ranges to a set of index values to remove
reject = Set.new(a2.flat_map(&:to_a))

# Using value/index pairs, accumulate those values which are
# not being excluded by their index.
a1.each_with_index.each_with_object([ ]) do |(v, i), a|
  a << v unless reject.include?(i)
end
# => [0, 1, 5, 6, 7, 12, 13, 14, 15, 18, 19, 20]
[-1, *a2.flat_map(&:minmax), a1.length].each_slice(2).flat_map{|i,j| a1[i+1...j]}
# => [0, 1, 5, 6, 7, 12, 13, 14, 15, 18, 19, 20]
I'm not sure this is the least naive solution, but it seems simple to convert your ranges into arrays so you're dealing with like-for-like:
a2.each{ |a| a1 = a1 - a.to_a }
a1 = [0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20]
a2 = [2..4, 8..11, 16..17]
a1 - a2.flat_map(&:to_a)

Tensorflow conv2d_transpose size error "Number of rows of out_backprop doesn't match computed"

I am creating a convolutional autoencoder in tensorflow and got this exact error:
tensorflow.python.framework.errors.InvalidArgumentError: Conv2DBackpropInput: Number of rows of out_backprop doesn't match computed: actual = 8, computed = 12
[[Node: conv2d_transpose = Conv2DBackpropInput[T=DT_FLOAT, data_format="NHWC", padding="SAME", strides=[1, 1, 1, 1], use_cudnn_on_gpu=true, _device="/job:localhost/replica:0/task:0/cpu:0"](conv2d_transpose/output_shape, Variable_1/read, MaxPool_1)]]
Relevant code:
l1d = tf.nn.relu(tf.nn.conv2d_transpose(l1da, w2, [10, 12, 12, 32], strides=[1, 1, 1, 1], padding='SAME'))
where
w2 = tf.Variable(tf.random_normal([5, 5, 32, 64], stddev=0.01))
I checked the shape of the input to conv2d_transpose, i.e. l1da, and it is correct (10x8x8x64). The batch size is 10, the input to this layer has the form 8x8x64, and the output is supposed to be 12x12x32.
What am I missing?
Found the error: padding should be 'VALID', not 'SAME'. With padding='SAME' and stride 1, conv2d_transpose produces an output with the same spatial size as the input (8x8), whereas padding='VALID' gives (8 - 1) * 1 + 5 = 12, matching the requested 12x12 output.
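For reference, the corrected call from the question would then be:
l1d = tf.nn.relu(tf.nn.conv2d_transpose(l1da, w2, [10, 12, 12, 32],
                                        strides=[1, 1, 1, 1], padding='VALID'))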
