I use imread to read a large image (about 1.5 GB), but image.data[x] is always 0.
Are there any parameters I can use?
Or is there any other API I can use to load this image?
Thanks
Ubuntu 12.04
Image information:
XXX.tif
1.21 GB
Dimensions: 34795 * 37552
Horizontal resolution: 150 dpi
Vertical resolution: 150 dpi
Bit depth: 8
A cv::Mat is a single contiguous memory allocation. It might be almost impossible to allocate 1.5 GB of contiguous memory. Try splitting the file, either spatially or by channel, as in the sketch below.
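One way to avoid a single huge allocation is to read the file band by band with libtiff and process each band as a small cv::Mat. This is only a hedged sketch: it assumes an 8-bit, single-channel, strip-organized TIFF (a tiled TIFF would need TIFFReadTile instead), and processInBands / bandRows are illustrative names, not existing API.

#include <tiffio.h>
#include <opencv2/core.hpp>
#include <algorithm>
#include <cstdint>
#include <cstring>
#include <vector>

// Read the TIFF one band of rows at a time instead of all at once.
void processInBands(const char* path, uint32_t bandRows = 1024)
{
    TIFF* tif = TIFFOpen(path, "r");
    if (!tif) return;

    uint32_t width = 0, height = 0;
    TIFFGetField(tif, TIFFTAG_IMAGEWIDTH, &width);
    TIFFGetField(tif, TIFFTAG_IMAGELENGTH, &height);

    std::vector<uint8_t> rowBuf(TIFFScanlineSize(tif));
    for (uint32_t y0 = 0; y0 < height; y0 += bandRows) {
        const uint32_t rows = std::min(bandRows, height - y0);
        cv::Mat band(rows, width, CV_8UC1);                  // one modest band at a time
        for (uint32_t r = 0; r < rows; ++r) {
            TIFFReadScanline(tif, rowBuf.data(), y0 + r, 0);
            std::memcpy(band.ptr(r), rowBuf.data(), width);  // 1 byte per pixel assumed
        }
        // ... process 'band' here ...
    }
    TIFFClose(tif);
}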
I am working on a binary high-resolution segmentation problem. Positive pixels are all marked with the same value, while negative pixels are all zero. The input image is scaled down to 1/4 by bicubic interpolation.
After scaling, the pixel values of the positive labels are no longer all the same. How should I process these label images so that it remains a binary segmentation problem? Should I set all pixels larger than 0 to positive, or only the pixels larger than some threshold?
If the latter, how do I choose the threshold?
I suggest you do not use built-in resize functions such as zoom or imresize. Suppose you have a binary mask of size 225 * 225; its central point is (113, 113). Starting from this central point, sub-sample the points in all four directions with an equal step (say 4). You will find there are 4 different ways to place the sampling grid; average them, as in the sketch below.
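One way to read that suggestion as code, as a hedged sketch only: downscaleBinaryMask is an illustrative name, it assumes an 8-bit mask with values 0/255 and a step of 4, and it interprets the "4 different sample ways" as the four possible grid offsets within each step cell.

#include <opencv2/core.hpp>
#include <opencv2/imgproc.hpp>
#include <vector>

// Downscale a binary mask by strided sub-sampling at four grid offsets,
// average the four results, then re-binarize at 0.5 (a majority vote).
cv::Mat downscaleBinaryMask(const cv::Mat& mask, int step = 4)
{
    CV_Assert(mask.type() == CV_8UC1);
    const int outRows = mask.rows / step;
    const int outCols = mask.cols / step;

    cv::Mat acc = cv::Mat::zeros(outRows, outCols, CV_32F);
    const std::vector<cv::Point> offsets = {
        {0, 0}, {step - 1, 0}, {0, step - 1}, {step - 1, step - 1} };

    for (const cv::Point& off : offsets)
        for (int r = 0; r < outRows; ++r)
            for (int c = 0; c < outCols; ++c)
                acc.at<float>(r, c) +=
                    mask.at<uchar>(r * step + off.y, c * step + off.x) / 255.0f;

    cv::Mat avg = acc / static_cast<float>(offsets.size());
    cv::Mat out;
    cv::threshold(avg, out, 0.5, 255.0, cv::THRESH_BINARY);
    out.convertTo(out, CV_8U);
    return out;
}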
The task at hand is to split an available BGR (raw) image into N equal images. Can someone give me a hint on how BGR raw images are stored in memory?
For example:
If I have a 1920 * 1080 pixel BGR image and I would like to split it into 8 equal parts, is there any available framework that can help me? I'm trying to write native C++ code on Android; working with OpenCV would be expensive, so is there any other alternative?
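A hedged sketch of the usual layout and of a framework-free split (BgrView and splitBgrRaw are illustrative names; it assumes the common row-major, interleaved, unpadded format):

#include <cstddef>
#include <vector>

// A raw BGR buffer is typically row-major and interleaved: the byte offset of
// pixel (x, y) is (y * width + x) * 3, with bytes ordered B, G, R. Splitting
// into equal horizontal strips then needs no copying, just pointers into the buffer.
struct BgrView {
    unsigned char* data;   // first byte of the strip
    int width, height;     // strip dimensions in pixels
    std::size_t stride;    // bytes per row (width * 3 if rows are unpadded)
};

std::vector<BgrView> splitBgrRaw(unsigned char* buf, int width, int height, int parts)
{
    const std::size_t stride = static_cast<std::size_t>(width) * 3;
    const int stripHeight = height / parts;               // assumes height % parts == 0
    std::vector<BgrView> views;
    for (int i = 0; i < parts; ++i)
        views.push_back({ buf + i * stripHeight * stride, width, stripHeight, stride });
    return views;
}

// Usage: a 1920 x 1080 frame split into 8 parts gives eight 1920 x 135 strips.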
This may be a basic question, but I'm new to OpenCV and I find the documentation to be very poor. I think I would like to use the resize function to get a new image at the same size as the original, but at a lower resolution.
All the documentation I find acts as if resolution and size are the same thing, and I have no idea what these parameters mean. Different sources seem to show a different version of resize than what I see in the headers:
CV_EXPORTS_W void resize( InputArray src, OutputArray dst,
Size dsize, double fx=0, double fy=0,
int interpolation=INTER_LINEAR );
If I keep dsize the same as my original size, what do fx and fy represent, and how would I get a resolution of, say, 72 dpi?
Let me get something straight: when you load your image into memory, you have, to a good approximation, a matrix of numbers with a given number of rows and columns. And from the definition of dpi, which is the number of individual dots that can be placed in a line one inch long, there is no "inch" in memory. How would you define dpi for a matrix stored in memory? It makes no sense to talk about it for memory alone. That is why in OpenCV (and probably in any other processing library) resolution and size are the same concept.
Maybe you want to achieve something like "artificial" dpi lowering? Something that "looks like" an image printed at a lower dpi? In that case, why not try resizing the same image down and then back up to achieve this effect?
And the cv::resize() function changes the size either by a given destination size (the dsize parameter) or by scale factors (fx and fy), as sketched below.
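A hedged sketch of both call styles, plus the down-then-up trick mentioned above; resizeExamples is an illustrative name and src is assumed to be any already-loaded cv::Mat.

#include <opencv2/imgproc.hpp>

void resizeExamples(const cv::Mat& src)
{
    cv::Mat dst;

    // 1) Explicit destination size: half the original dimensions.
    cv::resize(src, dst, cv::Size(src.cols / 2, src.rows / 2), 0, 0, cv::INTER_LINEAR);

    // 2) Scale factors: dsize is left empty and is computed from fx and fy.
    cv::resize(src, dst, cv::Size(), 0.5, 0.5, cv::INTER_LINEAR);

    // 3) "Looks like" lower resolution at the original size: shrink, then enlarge back.
    cv::Mat shrunk, lowRes;
    cv::resize(src, shrunk, cv::Size(), 0.25, 0.25, cv::INTER_AREA);
    cv::resize(shrunk, lowRes, src.size(), 0, 0, cv::INTER_LINEAR);
}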
How do I interpret the texture information printed by the deviceQuery sample to find out the texture memory size?
Here is the relevant output:
Max Texture Dimension Size (x,y,z) 1D=(65536), 2D=(65536,65535),3D=(2048,2048,2048)
Max Layered Texture Size (dim) x layers 1D=(16384) x 2048, 2D=(16384,16384) x 2048
It is a common misconception, but there is no such thing as "texture memory" in CUDA GPUs. There are only textures, which are global memory allocations accessed through dedicated hardware with built-in caching, filtering, and addressing limitations; those limitations lead to the size limits you see reported in the documentation and in deviceQuery. So the limit is either roughly the amount of free global memory (allowing for padding and alignment in CUDA arrays) or the dimensional limits you already quoted.
The output shows that the maximum texture dimensions are:
For 1D textures 65536
For 2D textures 65536*65535
For 3D textures 2048*2048*2048
If you want the size in bytes, multiply that by the maximum number of channels (4) and the maximum per-channel element size (4 bytes).
(For layered textures, multiply the relevant numbers you got for the dimensions by the number of maximum layers you got.)
However, this is the maximum size for a single texture, not the available memory for all textures.
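To make that concrete, a small worked example (plain C++; the dimensions are the ones from the deviceQuery output above):

#include <cstdio>

// Back-of-the-envelope: a single 2D texture at the reported maximum dimensions,
// with 4 channels of 4-byte elements, would already need about 64 GiB, far more
// than any card's global memory. That is why the practical limit is the free
// global memory, not these dimensional maxima.
int main()
{
    const unsigned long long w = 65536, h = 65535;
    const unsigned long long bytes = w * h * 4ULL * 4ULL;          // channels x bytes/channel
    std::printf("%.1f GiB\n", bytes / (1024.0 * 1024.0 * 1024.0)); // ~64.0 GiB
    return 0;
}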
I want to scale an image used on the iPhone up for the iPad.
I have one image with a resolution of 300 dpi.
Its size is 320 * 127.
How much can I scale this image, at most, so that it will not blur?
I am stuck on the relation between an image's resolution and its maximum dimensions.
I don't think you understand the idea of resolution.
Your requirement is simply "so that it will not blur." If you scale an image larger or smaller than its native resolution (which I'm assuming is your 320 * 127), the display device has to either reduce or increase the number of pixels. It does this by interpolating, or "blurring", the pixel colors.
Now, if you're asking how much can you alter an image's scale so that a human eye can't tell the difference, that's a different question.
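For the numeric side (the relation the asker is stuck on), a hedged back-of-the-envelope sketch; the values are taken from the question, and whether the resulting effective density looks blurry is exactly the perceptual question above.

#include <cstdio>

// Relation between pixel dimensions, dpi and physical size:
// physical inches = pixels / dpi, and drawing the image at k times its
// native physical size divides the effective density by k.
int main()
{
    const double dpi = 300.0, widthPx = 320.0, heightPx = 127.0, scale = 2.0;
    std::printf("physical size: %.2f x %.2f in\n", widthPx / dpi, heightPx / dpi); // ~1.07 x 0.42
    std::printf("effective density at %gx: %.0f dpi\n", scale, dpi / scale);       // 150 dpi
    return 0;
}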