Following is the code to initialize a 3D array in CUDA with width = 809, height = 127, and number of layers = 2160:
cudaArray *sinor;
cudaExtent volumeSize = make_cudaExtent(809, 127, 2160);
const cudaChannelFormatDesc channelDesc = cudaCreateChannelDesc<float>();
gpuErrchk(cudaMalloc3DArray(&sinor, &channelDesc, volumeSize, cudaArrayLayered));
The last line returns an "invalid argument" error. Is that because my number of layers is too large? I tried 1940 and it was fine. If I cannot use such a large number of layers, what is the workaround here? Thanks a lot.
You can find the texture layer depth limit in the documentation here. As you inferred, the depth limit for layered textures and surfaces is 2048.
As was suggested in the comments, your only real workaround is to split your data over multiple texture objects and select between the objects based on the index within the virtual combined texture.
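A minimal sketch of that split, assuming two layered arrays with texture objects bound to them (the names sinorLo/sinorHi, texLo/texHi and the 1080/1080 split are illustrative, not from the original code):
// Split the 2160 layers across two layered arrays, each below the 2048 limit.
cudaArray *sinorLo = nullptr, *sinorHi = nullptr;
const cudaChannelFormatDesc channelDesc = cudaCreateChannelDesc<float>();
cudaExtent extLo = make_cudaExtent(809, 127, 1080);
cudaExtent extHi = make_cudaExtent(809, 127, 2160 - 1080);
gpuErrchk(cudaMalloc3DArray(&sinorLo, &channelDesc, extLo, cudaArrayLayered));
gpuErrchk(cudaMalloc3DArray(&sinorHi, &channelDesc, extHi, cudaArrayLayered));
// In device code, select the texture object by layer index, e.g.:
// float v = (layer < 1080) ? tex2DLayered<float>(texLo, x, y, layer)
//                          : tex2DLayered<float>(texHi, x, y, layer - 1080);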
This is the architecture of YOLO. I am trying to calculate the output size of each layer myself, but I can't get the size as described in the paper.
For example, in the first conv layer the input size is 448x448 and it uses a 7x7 filter with stride 2. According to the equation W2 = (W1 − F + 2P)/S + 1 = (448 - 7 + 0)/2 + 1, I can't get an integer result, so the filter size seems unsuitable for the input size.
Can anyone explain this problem? Did I miss something or misunderstand the YOLO architecture?
As Hawx Won said, the input image gets an extra padding of 3, and here is how it works in the source code.
For convolutional layers, if pad is enabled, the padding value of each layer is calculated by:
// In parser.c
if(pad) padding = size/2;
// In convolutional_layer.c
l.pad = padding;
where size is the filter size.
So, for the first layer: padding = size/2 = 7/2 = 3 (integer division).
Then the output of the first convolutional layer should be:
output_w = (input_w+2*pad-size)/stride+1 = (448+6-7)/2+1 = 224
output_h = (input_h+2*pad-size)/stride+1 = (448+6-7)/2+1 = 224
Well, I spent some time reading the source code and learned that the input image gets an extra padding of 3 on the top, bottom, left, and right sides, so the image size becomes 448 + 2x3 = 454. The output size with valid padding should then be calculated this way:
output_size = ceil((W-F+1)/S) = ceil((454-7+1)/2) = 224, therefore the output size should be 224x224x64.
I hope this is helpful.
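For reference, a tiny sketch in C that mirrors darknet's integer arithmetic (conv_out_size is just an illustrative helper, not a function from darknet):
#include <stdio.h>

/* Output dimension of a convolutional layer using darknet's padding rule. */
static int conv_out_size(int in, int size, int stride, int pad_enabled)
{
    int padding = pad_enabled ? size/2 : 0;   /* integer division: 7/2 = 3 */
    return (in + 2*padding - size)/stride + 1;
}

int main(void)
{
    /* First YOLO conv layer: 448x448 input, 7x7 filter, stride 2, pad on. */
    printf("%d\n", conv_out_size(448, 7, 2, 1));   /* prints 224 */
    return 0;
}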
I am having a hard time solving an issue with mask creation. My image is large,
40959px x 24575px, and I am trying to create a mask for it.
I noticed that I don't have a problem for images up to a certain size (I tested about 33000px x 22000px), but for dimensions larger than that I get an error inside my mask: it turns black in the middle of the polygon and the white region extends itself to the left edge. The result should have no black area inside the polygon and no white area extending to the left edge of the image.
So my code looks like this:
pixel_points_list = latLonToPixel(dataSet, lat_lon_pairs)
print pixel_points_list
# This is the list im getting
#[[213, 6259], [22301, 23608], [25363, 22223], [27477, 23608], [35058, 18433], [12168, 282], [213, 6259]]
image = cv2.imread(in_tmpImgFilePath,-1)
print image.shape
#Value of image.shape: (24575, 40959, 4)
mask = np.zeros(image.shape, dtype=np.uint8)
roi_corners = np.array([pixel_points_list], dtype=np.int32)
print roi_corners
#contents of roi_corners_array:
"""
[[[ 213 6259]
[22301 23608]
[25363 22223]
[27477 23608]
[35058 18433]
[12168 282]
[ 213 6259]]]
"""
channel_count = image.shape[2]
ignore_mask_color = (255,)*channel_count
cv2.fillPoly(mask, roi_corners, ignore_mask_color)
cv2.imwrite("mask.tif",mask)
And this is the mask I am getting with those coordinates (minified mask):
You can see that the mask is mirrored in the middle. I took the points from pixel_points_list and drew them on a coordinate system and I get a valid polygon, but when using fillPoly I get wrong results.
Here is an even simpler example where I have only 4 (5) points:
roi_corners = array([[ 213 6259]
[22301 23608]
[35058 18433]
[12168 282]
[ 213 6259]])
And I get:
Does anyone have a clue why this happens?
Thanks!
The issue is in the function CollectPolyEdges, called by fillPoly (and drawContours, fillConvexPoly, etc...).
Internally, it's assumed that the point coordinates (of integer type int32) have meaningful values only in the 16 lowest bits. In practice, you can draw correctly only if your points have coordinates up to 32768 (which is exactly the maximum x coordinate you can draw in your image).
This can't really be considered a bug, since your images are extremely large.
As a workaround, you can try to scale your mask and your points down by a given factor, fill the polygon on the smaller mask, and then re-scale the mask back to the original size.
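For example, a minimal sketch of this scaling workaround, reusing the variables from the question (the factor of 4 is just an illustrative choice that keeps all coordinates below 32768):
scale = 4
small_h = image.shape[0] // scale
small_w = image.shape[1] // scale
small_mask = np.zeros((small_h, small_w, channel_count), dtype=np.uint8)
# scale the polygon coordinates down by the same factor
small_corners = (roi_corners // scale).astype(np.int32)
cv2.fillPoly(small_mask, small_corners, ignore_mask_color)
# scale the mask back up to the original image size
mask = cv2.resize(small_mask, (image.shape[1], image.shape[0]),
                  interpolation=cv2.INTER_NEAREST)
The polygon edges will be slightly jagged after the upscale, but the mask is otherwise correct.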
As @DanMašek pointed out in the comments, this is in fact a bug, and it is not fixed yet.
In the bug discussion there is another workaround mentioned: drawing into multiple ROIs, each smaller than 32768 pixels, and correcting the coordinates for each ROI using the offset parameter of fillPoly.
VFSGroupDataset<FImage> dataset = new VFSGroupDataset<FImage>(
"zip:file:/Users/nhnguyen/Data/newArchive.zip",
ImageUtilities.FIMAGE_READER);
int nTraining = 50;
int nTesting = 5;
GroupedRandomSplitter<String, FImage> splits =
new GroupedRandomSplitter<String, FImage>(dataset, nTraining, 0, nTesting);
GroupedDataset<String, ListDataset<FImage>, FImage> training = splits.getTrainingDataset();
GroupedDataset<String, ListDataset<FImage>, FImage> testing = splits.getTestDataset();
List<FImage> basisImages = DatasetAdaptors.asList(training);
int nEigenvectors = 100;
EigenImages eigen = new EigenImages(nEigenvectors);
eigen.train(basisImages);
I have the above code to test the EigenImages tutorial with my own set of data. What I am stuck on is that it throws a Matrix exception if the images in my data set vary in dimension, say 92x112 and 100x100 and so on... When I do a batch resize to the same size it works; however, this distorts the images a little, which I worry will affect the accuracy.
Is there a way to train the eigen recognizer to accept input with various dimensions?
No, the Eigenfaces approach inherently requires that all images are the same size and are also at least approximately aligned (i.e. same orientation, eyes in about the same place).
You might however be able to automate the scaling and alignment by using one of the OpenIMAJ FaceAligner implementations.
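As a rough sketch of the scaling step, reusing basisImages and eigen from the code above and assuming OpenIMAJ's ResizeProcessor.resample (the 100x100 target size is an arbitrary choice):
import java.util.ArrayList;
import java.util.List;

import org.openimaj.image.FImage;
import org.openimaj.image.processing.resize.ResizeProcessor;

// Resample every training image to a common size before training.
List<FImage> resized = new ArrayList<FImage>();
for (FImage img : basisImages) {
    resized.add(ResizeProcessor.resample(img, 100, 100));
}
eigen.train(resized);
A proper face detection and alignment step, as mentioned above, would reduce the distortion you are worried about compared to a plain resize.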
I am working on a project related to face recognition. For my program to work, each image should satisfy the condition img->widthStep = 3 * img->width.
I am trying my code on a database in which each image is of size 250x250, but the widthStep of these images is 752, so the above condition is not satisfied. widthStep is used when accessing pixels (http://opencv-users.1802565.n2.nabble.com/What-is-widthstep-td2679559.html).
Can I change the widthStep parameter to 750 without affecting other parameters of the image?
Or is there another way to achieve the condition img->widthStep = 3 * img->width?
I tried copying the 250x250 image into a 260x260 image as follows:
Mat img1, img2=Mat::zeros(Size(260,260),CV_8UC3);
img1 = imread(ch);
img1.copyTo(img2.colRange(1,250).rowRange(1,250));
But it shows this error:
OpenCV Error: Assertion failed (!fixedSize() || ((Mat*)obj)->size.operator()() == Size(cols, rows)) in unknown function, file D:\opencv2.4.5\opencv\modules\core\src\matrix.cpp, line 1372
Can anyone help me out?
Thank you!
Since you are using the term widthStep, I guess you are using IplImage. IplImage was taken from the Intel Performance Primitives (IPP) library. For good performance, it is required that the widthStep of each row be a multiple of 4, so rows are padded with additional bytes (in your case 250*3 = 750 bytes of pixel data per row, padded up to 752, which is exactly the widthStep you observe). So as long as you are using IplImage, you won't be able to have a widthStep of 750, which is not a multiple of 4.
OpenCV 1 was based on IplImage, but OpenCV 2 is based on Mat. It has been years since IplImage was deprecated.
Mat has no such limitation. By default its step will be 750.
After the edit of the question:
colRange(1,250) selects 249 columns, not 250, and the same goes for rowRange(1,250). When the size of the image being copied differs from the size of the target image, the target is reallocated. But since colRange and rowRange return a constant temporary image, it cannot be reallocated, and the program crashes.
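A minimal sketch of a corrected copy ("face.jpg" is a placeholder path, and the 5-pixel border is just an illustrative placement inside the 260x260 target):
#include <opencv2/opencv.hpp>
using namespace cv;

int main()
{
    Mat img1 = imread("face.jpg");                  // assumed 250x250, 3 channels
    Mat img2 = Mat::zeros(Size(260, 260), CV_8UC3);

    // Select a 250x250 ROI inside img2 that matches img1's size exactly,
    // so copyTo does not need to reallocate the temporary ROI header.
    img1.copyTo(img2(Rect(5, 5, img1.cols, img1.rows)));

    return 0;
}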
I use zeros to initialize my matrix like this:
height = 352
width = 288
nFrames = 120
imgYuv=zeros([height,width,3,nFrames]);
However, when I set the value of nFrames larger than 120, MATLAB gives me an error message saying out of memory.
The original function is
[imgYuv, S, A]= changeYuv(fileName, width, height, idxFrame, nFrames)
my command is
[imgYuv,S,A]=changeYuv('tilt.yuv',352,288,1:120,120);
Can anyone please tell me what's going on here?
PS: one of the purposes of the function is to load a YUV video consisting of more than 2000 frames. Is there any way to implement that?
There are three ways to avoid the error:
1. Process a limited number of frames at any given time (see the sketch after this list).
2. Work with integer arrays. Most movies are in 8-bit format, while MATLAB normally works with doubles. uint8 takes 1 byte per element, while double takes 8 bytes. Thus, if you create your array as B = zeros(height,width,3,nFrames,'uint8'), it only uses 1/8th of the memory. This might work for 120 frames, though for 2000 frames you'll run into trouble again. Note that not all MATLAB functions work on integer arrays; you may have to reimplement those that require double.
3. Buy more RAM.
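A minimal sketch of option 1, assuming changeYuv accepts an arbitrary idxFrame range (the chunk size of 60 is an illustrative choice):
% Process the video in chunks instead of loading all frames at once.
chunk = 60;              % frames per chunk (illustrative)
total = 2000;            % total number of frames in the video
for firstFrame = 1:chunk:total
    idx = firstFrame : min(firstFrame + chunk - 1, total);
    [imgYuv, S, A] = changeYuv('tilt.yuv', 352, 288, idx, numel(idx));
    % ... process this chunk here; imgYuv is overwritten on the next
    % iteration, so its memory is reused ...
end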
Yes, you (or rather, your Matlab session) are running out of memory.
Get out your calculator and find the product height x width x 3 x nFrames x 8, which tells you how many bytes you tried to allocate in your call to zeros. That will be a number either close to or in excess of the RAM available to MATLAB on your computer.
Your command is:
[imgYuv,S,A]=changeYuv('tilt.yuv',352,288,1:120,120);
That is:
352*288*3*120 = 36,495,360 elements
At 8 bytes per double, that is roughly 290 MB for imgYuv alone, and the function may build several arrays of that size (plus intermediate copies) before it returns. Memory adds up quickly...
Referencing the code I saw in your withdrawn post, you're calculating the difference between adjacent frame histograms. One option to avoid the massive memory allocation might be to hold just two frames in memory instead of reading all the frames at once.
The function B = zeros([d1 d2 d3 ...]) creates a multi-dimensional array with dimensions d1*d2*d3*...
Depending on width and height, with a 3rd dimension of 3 and a 4th dimension of 120 (which effectively means width*height*360 elements), this may result in a very large array. There are memory limits on every machine; maybe you reached them... ;)