I have an HDF5 dataset that contains mixed data types (int and float).
When I read it into a NumPy array, it is detected as an array of dtype np.void.
import numpy as np
import h5py
f = h5py.File('Sample.h5', 'r')
array = np.array(f['/Group1/Dataset'])
print(array.dtype)
[Image: output of print(array.dtype), showing the compound dtype]
How can I read this dataset into arrays where each column keeps the same data type as the input? Thanks in advance for the reply.
Here are 2 simple examples showing both ways to slice a subset of the dataset using the HDF5 Field/Column names.
The first method extracts a subset of the data to a record array by slicing when accessing the dataset.
The second method follows your current method. It extracts the entire dataset to a record array, then slices a new view to access a subset of the data.
Print statements are used liberally so you can see what's going on.
Method 1
real_array = np.array(f['/Group1/Dataset'][:,'XR','YR','ZR'])
print(real_array.dtype)
print(real_array.shape)
Method 2
cmplx_array = np.array(f['/Group1/Dataset'])
print(cmplx_array.dtype)
print(cmplx_array.shape)
disp_real = cmplx_array[['XR','YR','ZR']]
print(disp_real.dtype)
print(disp_real.shape)
Review this SO topic for additional insights into copying values from a recarray to an ndarray, and back:
copy-numpy-recarray-to-ndarray
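If you then need a plain 2D ndarray rather than a record array, one option is numpy.lib.recfunctions.structured_to_unstructured. A minimal sketch, assuming the selected fields are all floats (the rec array here is just a stand-in for disp_real above):
import numpy as np
from numpy.lib.recfunctions import structured_to_unstructured

# Stand-in structured array with the same field names as the dataset above
rec = np.zeros(5, dtype=[('XR', 'f8'), ('YR', 'f8'), ('ZR', 'f8')])

plain = structured_to_unstructured(rec)  # ordinary 2D ndarray of float64
print(plain.dtype)   # float64
print(plain.shape)   # (5, 3)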
Is there a way to handle the DataLoader as a list? The idea is that I want to shuffle implicit pairs of images without setting shuffle to True.
Basically, I have for example 10 scenes, each containing let's say 100 sequences, so they are represented inside the directory as
'1_1.png', '1_2.png', '1_3.png', ..., '2_1.png', '2_2.png', '2_3.png', ..., '3_1.png', '3_2.png', '3_3.png', ..., '10_1.png', '10_2.png', '10_3.png', ...
I don't want complete shuffling of the data; what I want is simply to shuffle while keeping pairs together, so they are represented in the data loader as
[ '1_3.png', '1_4.png', '2_2.png', '2_3.png', '10_1.png', '10_2.png', '1_2.png', '1_3.png', ...]
and so on
Please have a look at this question, which I have already asked on Stack Overflow about shuffling an array of implicit pairs; it explains what I mean.
As an example:
if this is a list
L = [['1_1'],['1_2'],['1_3'],['1_4'],['1_5'],['1_6'],['2_1'],['2_2'],['2_3'],['2_4'],['2_5'],['2_6'],['3_1'],['3_2'],['3_3'],['3_4'],['3_5'],['3_6']]
then this is the output
[['1_2'], ['1_3'], ['2_1'], ['2_2'], ['2_4'], ['2_5'],
['2_2'], ['2_3'], ['1_3'], ['1_4'], ['3_4'], ['3_5'],
['3_3'], ['3_4'], ['3_2'], ['3_3'], ['1_6'], ['2_1'],
['2_5'], ['2_6'], ['2_6'], ['3_1'], ['1_4'], ['1_5'],
['1_1'], ['1_2'], ['2_3'], ['2_4'], ['1_5'], ['1_6'],
['3_1'], ['3_2'], ['3_5'], ['3_6']]
I want to achieve the same for a DataLoader
The main idea is that I want to train my network on sequential frames; it doesn't have to be the complete sequence, but at each step I need at least two consecutive frames to be present.
I think you are looking for data.Sampler: instead of the completely random default shuffle of data.DataLoader, you can provide your own "sampler" that samples examples from your Dataset.
Looking at the input parameters of data.DataLoader:
sampler (Sampler, optional) – defines the strategy to draw samples
from the dataset. If specified, shuffle must be False.
I think a good starting point is to look at the code of data.SubsetRandomSampler.
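As a rough sketch of that idea (PairShuffleSampler and pair_starts are names invented here, not part of the library; pair_starts would be the dataset indices of the first frame of each valid pair, derived from your sorted file list):
import random
from torch.utils.data import Sampler

class PairShuffleSampler(Sampler):
    """Yield dataset indices so that each (i, i+1) pair stays together,
    while the order of the pairs themselves is shuffled."""

    def __init__(self, pair_starts):
        # pair_starts: indices i such that frames i and i+1 belong to the same scene
        self.pair_starts = list(pair_starts)

    def __iter__(self):
        starts = self.pair_starts[:]
        random.shuffle(starts)      # shuffle the pairs, not the individual frames
        for i in starts:
            yield i                 # first frame of the pair
            yield i + 1             # its partner, always immediately after

    def __len__(self):
        return 2 * len(self.pair_starts)
You would then pass sampler=PairShuffleSampler(pair_starts) to the DataLoader and leave shuffle at its default of False, as the documentation quoted above requires.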
I have been working with a FORTRAN program. I have noticed seemingly random changes in a 1D matrix I'm working with. It is a matrix of 4000 integers. Values are added to the matrix one by one, starting with index 1 and iterating by 1 for each added value. The matrix does not get fully "filled", currently only 100 values are placed into the matrix. So one would expect that the first 100 entries of the matrix will be non-zero (all added values are non-zero) and the remaining 3900 entries will be 0. However, several of the entries of the matrix end up being large negative numbers, but I'm certain that no portion of my code touches these entries.
What could be causing this issue? I'm sorry but I can't post the code for you all to work with.
The code has several other large matrices, taking up a total of ~100 MB of space. Could this potentially be a memory issue?
Thanks!
You have to initialize your array, otherwise it will almost always contain garbage. This would do it:
array = 0.0e0 ! real array
or
array = 0.0d0 ! double precision
or
array = 0 ! integer
A "matrix" is two-dimensional; your array is one-dimensional.
Things do not change unless you ask them to change.
FORTRAN does not initialize variables other than (as I recall) in a labeled COMMON. As such, they are guaranteed to start out with garbage values. Try initializing your data with a DATA statement. If you have to initialize a labeled COMMON, you will have to supply a BLOCK DATA subprogram.
I have no idea how to implement this matrix operation efficiently in OpenCV.
I have a binary Mat nz(150,600) with 0 and 1 elements.
I have a Mat mk(150,600) with double values.
I would like to implement the equivalent of the Matlab expression
sk = mk(nz);
That command copies into sk only those elements of mk at the locations where nz is 1, and then makes sk a row matrix.
How can I implement it in OpenCV efficiently for speed and memory?
You should take a look at Mat::copyTo and Mat::clone.
copyTo will make a copy with an optional mask, whose non-zero elements indicate which matrix elements need to be copied.
mk.copyTo(sk, nz);
And if you really want a row matrix then call sk.reshape() as member sansuiso already suggested. This method ...
creates alternative matrix header for the same data, with different
number of channels and/or different number of rows.
bkausbk gave the best answer. However, a second way around:
A=bitwise_and(nz,mk);
If you iterate over A, you can copy the non-zero elements into a std::vector. If you want your output to be a cv::Mat instance, then you have to allocate the memory first:
int S = countNonZero(A); // size of the final output matrix
Now, fast element access is an actual topic of itself. Google it. Or have a look at opencv/modules/core/src/stat.cpp where countNonZero() is implemented to get some ideas.
There are two steps involved in your task.
First, you convert to double the input matrix:
cv::Mat binaryMat; // source matrix, filled somewhere
cv::Mat doubleMat; // target matrix (with doubles)
binaryMat.convertTo(doubleMat, CV_64F); // Perform the conversion
Then, reshape the result as a row matrix:
doubleMat = doubleMat.reshape(1, 1);
// Alternatively:
cv::Mat doubleRow = doubleMat.reshape(1, 1);
The reshape operation is efficient in the sense that the data is not copied; only the matrix header changes.
This function returns a new reference to a matrix (by creating a new header), thus you should not forget to assign its result.
I am trying to implement a 1D DCT type II filter in LabVIEW. The formula for this can be seen here.
As you can see, each xk is computed by a summation over n, so producing the whole output requires iterating over both k and n.
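For reference (the linked formula is an image and is not reproduced here), the standard unnormalized DCT-II is X_k = \sum_{n=0}^{N-1} x_n \cos[(\pi/N)(n + 1/2)k], for k = 0, ..., N-1.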
As far as I know, the nested for loop should handle the function, with the shift registers keeping a running total of the output. My problem lies with the output to the matrix xk. Either there is only one output to the matrix, or each output overwrites the last one because there is no indexing. Trying to put the matrix inside the for loop results in an error between the shift register and the matrix:
You have connected two terminals of different types.
The source is a double and the sink is a 1D array of double
Anyone know how I can index the output to the array?
I believe this should work. Please check the math.
The inner for loop will run either 8 times or however many elements are in the array xn; LabVIEW uses whichever number is smaller to determine the iteration count. So if xn is empty, the for loop won't run at all, and if it has 20 elements, the for loop will run 8 times.
Regardless, the outer loop will always run 8 times, so xk will have 8 elements total.
Also, shift registers that are not initialized at the beginning of a for or while loop can cause problems, unless you mean to do that: the value left in the shift register after the first run will still be there the next time you run it.
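If it helps to double-check the math, here is a small sketch of the same nested-loop, unnormalized DCT-II in plain Python/NumPy (dct2_naive is just an illustrative name; the 8-point output matches the description above):
import numpy as np

def dct2_naive(xn, N=8):
    # Unnormalized DCT-II: xk[k] = sum over n of xn[n] * cos(pi/N * (n + 0.5) * k)
    M = min(N, len(xn))        # inner loop count, mirroring the LabVIEW behaviour above
    xk = np.zeros(N)           # the outer loop always produces N (here 8) elements
    for k in range(N):
        total = 0.0            # plays the role of the shift register
        for n in range(M):
            total += xn[n] * np.cos(np.pi / N * (n + 0.5) * k)
        xk[k] = total          # one indexed output per outer iteration
    return xk

print(dct2_naive([1.0, 2.0, 3.0, 4.0, 5.0, 6.0, 7.0, 8.0]))
The running total corresponds to the shift register, and writing xk[k] once per outer iteration is essentially the indexing step that the wiring error above complains about.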