Align vertex data - iOS

I've read this post about vertex alignment and I'm not sure I understand everything.
The article says that I need to add an offset after each attribute to have aligned data (by the way, the article talks about 4 bytes, but in its picture they add two bytes).
In my situation, I've got something like this:
Position : 3 floats (3 * 4 bytes = 12)
Colors : 4 unsigned char (4 * 1 bytes = 4)
Uvs : 2 shorts (2 * 2 bytes = 4)
So, do I just have to add 4 bytes after each attribute?
Thanks!

The article recommends 4-byte alignment per attribute. In the example, an attribute with 3 shorts is used, which is 6 bytes total, so the following attribute would not be aligned to a 4-byte boundary. Hence, they add two padding bytes.
In your case, all attributes are already multiples of 4 bytes, so you don't have to add any padding to get 4-byte alignment.
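For illustration, here is a rough C sketch of that layout (the struct and field names are made up, but the sizes match yours); every attribute starts at a 4-byte offset and the stride is 20 bytes:
#include <stdint.h>

typedef struct {
    float    position[3];  /* 12 bytes, offset 0  */
    uint8_t  color[4];     /*  4 bytes, offset 12 */
    int16_t  uv[2];        /*  4 bytes, offset 16 */
} Vertex;                  /* sizeof(Vertex) == 20, no padding needed */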

Related

Can anyone please explain what RowSize and PixelArraySize mean in BMP file format?

I was writing code for reading BMP image files, and I couldn't understand what RowSize and PixelArraySize mean or how they work. Are these two formulas used for padding the row to a multiple of 4 bytes? Can anyone help me understand this? Thanks a lot!
As is explained in the picture just below the formula, RowSize represents the size of a single image row in bytes, rounded (padded) up to the nearest multiple of 4. This padding is often applied for performance reasons (memory alignment).
The formula shows 2 ways to calculate RowSize, padded to 4 bytes:
ceil(BitsPerPixel * ImageWidth / 32) * 4 - take row size in bits, divide by 32 (i.e. 4 bytes), round up, then multiply by 4 to get the number in bytes
floor((BitsPerPixel * ImageWidth + 31) / 32) * 4 - take row size in bits, add 31, divide by 32 (i.e. 4 bytes), round down, then multiply by 4 to get the number in bytes
You can see that the two ways are equivalent.
Version 2 is often preferred because rounding down in integer arithmetic happens implicitly:
int BitsPerPixel, ImageWidth;
. . .
int RowSize = ((BitsPerPixel * ImageWidth + 31) / 32) * 4; // Voila.
Now, PixelArraySize is just RowSize times the number of rows.
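For example (a quick C sketch with made-up numbers, not taken from the question), for a 24-bit image that is 101 pixels wide and 50 pixels high:
int BitsPerPixel = 24, ImageWidth = 101, ImageHeight = 50;
int RowSize = ((BitsPerPixel * ImageWidth + 31) / 32) * 4; // 303 raw bytes per row, padded to 304
int PixelArraySize = RowSize * ImageHeight;                // 304 * 50 = 15200 bytes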
An image is an array of pixels.
Each pixel has a depth (normally 8 or 16 bits); the higher the value, the brighter the pixel is. => this explains the BitsPerPixel in the equation.
So that images can be stored and processed, they should normally have a storage size (not pixel size) that is a multiple of 4. => if the rows are a multiple of 4, then no matter the height of the image, the image size will be a multiple of 4. This is why you are finding the RowSize.
This is "easy" for black/white images; take into account that colored images normally have 3 values per pixel (for example the RGB format has a value for Red, another for Green and one for Blue) => they should also be kept a multiple of 4.

What is the block size in HDF5?

Quoting from the HDF5 Hyperslab doc:
The block array determines the size of the element block selected from
the dataspace.
The example shows a dataset with the parameters set to the following:
start offset is specified as [1,1], stride is [4,4], count is [3,7], and block is [2,2]
This will result in 21 2x2 blocks, where the selections will be at (1,1), (5,1), (9,1), (1,5), (5,5), ... I can understand that: because the starting point is (1,1), the selection starts at that point; since the stride is (4,4), it moves 4 in each dimension; and since the count is (3,7), it increments 3 times by 4 in direction X and 7 times by 4 in direction Y, i.e. in its corresponding dimension.
But what I don't understand is: what is the block size doing? Does it mean that I will get 21 blocks of dimension 2x2? That means each block contains 4 elements, but the count is already set to 3 in one dimension, so how will that be possible?
A hyperslab selection created through H5Sselect_hyperslab() lets you create a region defined by a repeating block of elements.
This is described in section 7.4.2.2 of the HDF5 users guide found here (scroll down a bit to 7.4.2.2). The H5Sselect_hyperslab() reference manual entry might also be helpful.
Here is a diagram from the UG:
And here are the values used in that figure:
offset = (0,1)
stride = (4,3)
count = (2,4)
block = (3,2)
Notice how the repeating unit is a 3x2 element block. So yes, you will get 21 2x2 blocks in your case. There will be a grid of three blocks in one dimension and seven in the other, each spaced 4 elements apart in each direction. The first block will be offset by 1,1.
The most confusing thing about this API call is that three of the parameters have elements as their units, while count has blocks as its unit.
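As a rough sketch of the call with the values from your question (using the C API; the dataspace dimensions here are made up, just large enough to contain the selection):
#include "hdf5.h"

hsize_t dims[2]   = {12, 30};  /* assumed dataspace, big enough for the selection */
hsize_t start[2]  = {1, 1};
hsize_t stride[2] = {4, 4};
hsize_t count[2]  = {3, 7};    /* 3 x 7 = 21 blocks */
hsize_t block[2]  = {2, 2};    /* each block is 2x2 elements */

hid_t space = H5Screate_simple(2, dims, NULL);
H5Sselect_hyperslab(space, H5S_SELECT_SET, start, stride, count, block);
/* ... use the selection with H5Dread/H5Dwrite, then ... */
H5Sclose(space);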
Edit: Perhaps this will make how block and count are used more obvious...
HDFS default block size is 64 MB, which can be increased according to our requirements. One mapper processes one block at a time.

Virtual Memory - calculate number of pages in page table

Virtual address space is 64 bits
Page size is 64 KB
Word size is 4 bytes
How many pages are in the page table?
At first I thought:
page size = 64 KB = 2^16 bytes, so the offset uses 16 bits of the 64
Therefore, 48 bits are left -> there are 2^48 pages in the page table
(I didn't understand where to use the info about the word size)
However, the correct answer is that there are 2^50 pages, which confuses me...
Then I thought that maybe the page offset is only 14 bits, because the word size is 4 bytes = 2^2 bytes, so there are really 2^50 pages in the page table.
Am I right? Can I get a better explanation?
The page offset uses 14 bits of the 64, not 16, because the minimum addressable unit is a 4-byte word (which effectively removes 2 bits from the offset). That leaves the remaining 50 bits for the page number, so there are 2^50 pages.
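A quick sanity check of that arithmetic (a throwaway C snippet, assuming a word-addressable machine as in the question):
#include <stdio.h>
#include <math.h>

int main(void) {
    double words_per_page = (64.0 * 1024.0) / 4.0;  /* 64 KB page / 4-byte words = 16384 */
    int offset_bits = (int)log2(words_per_page);    /* 14 */
    int page_bits = 64 - offset_bits;               /* 50 -> 2^50 pages */
    printf("offset bits: %d, page-number bits: %d\n", offset_bits, page_bits);
    return 0;
}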

Which datatype should be used for long float values in iOS? [duplicate]

In my application I am just dividing 50 by 3, and I want to store the exact result of this.
If I use float it gives 16.666666, and if I use double it gives 16.666667.
Actually, I am creating three labels inside a frame; by dividing the height of the frame I am deciding the height of each label. So if I do not get the exact value, it creates a gap between the labels. If I pass 60 then it works fine, because 60/3 gives 20, but if I pass 50 then there is a gap.
If you want to make your frame divide into three equal-height areas, then the height of your frame in pixels needs to be divisible by three. You can't display fractional pixels, they are not divisible; each height as measured in pixels needs to be an integer number.
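One common workaround, sketched here in C with the number from the question, is to give the leftover pixel(s) to the last label so the three labels still fill the frame without a gap:
int frameHeight = 50;
int base = frameHeight / 3;        /* 16 */
int h1 = base, h2 = base;
int h3 = frameHeight - h1 - h2;    /* 18: absorbs the remainder, no gap */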
The only way to store an "exact" value would be to create a class called "Rational" (or similar) and store the numerator and denominator of the fraction as separate ivars. Floats and doubles (or any literal computer representation for that matter) cannot store rational numbers with an infinite number of decimal places or transcendental real numbers.
The way to use the "Rational" class would be to store the numerator and denominator, and then apply the appropriate maths to these values (if you wish to propagate "exactness" through the program). The slightly easier way would be to display the rational number as numerator and denominator but use the float/double approximation for the underlying mathematics.
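A minimal sketch of that idea in plain C (a real Rational type would also reduce fractions, guard against a zero denominator, and so on):
typedef struct {
    long num;  /* numerator   */
    long den;  /* denominator */
} Rational;

Rational r = { 50, 3 };                          /* 50/3 stored exactly */
double approx = (double)r.num / (double)r.den;   /* convert to double only when needed, e.g. for layout */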
float t= (float)50/3;
long double t1= (long double)50/3;
NSLog(@"%.30f %.30LF", t, t1);
produced output "16.666666030883789062500000000000 16.666666666666666666088425508008".
I would suggest you go with long double, which is not exact but precise enough.

My preallocation of a matrix gives out of memory error in MATLAB

I use zeros to initialize my matrix like this:
height = 352
width = 288
nFrames = 120
imgYuv=zeros([height,width,3,nFrames]);
However, when I set the value of nFrames larger than 120, MATLAB gives me an error message saying out of memory.
The original function is
[imgYuv, S, A]= changeYuv(fileName, width, height, idxFrame, nFrames)
my command is
[imgYuv,S,A]=changeYuv('tilt.yuv',352,288,1:120,120);
Can anyone please tell me what's going on here?
PS: one of the purposes of the function is to load a YUV video which consists of more than 2000 frames. Is there any possibility to implement that?
There are three ways to avoid the error:
1. Process a limited number of frames at any given time.
2. Work with integer arrays. Most movies are in 8-bit format, while Matlab normally works with doubles. uint8 takes 1 byte per element, while double takes 8 bytes. Thus, if you create your array as B = zeros(height,width,3,nFrames,'uint8'), it only uses 1/8th of the memory. This might work for 120 frames, though for 2000 frames you'll run into trouble again. Note that not all Matlab functions work for integer arrays; you may have to reimplement those that require double.
3. Buy more RAM.
Yes, you (or rather, your Matlab session) are running out of memory.
Get out your calculator and find the product height x width x 3 x nFrames x 8 which will tell you how much memory you have tried to get in your call to zeros. That will be a number either close to or in excess of the RAM available to Matlab on your computer.
Your command is:
[imgYuv,S,A]=changeYuv('tilt.yuv',352,288,1:120,120);
That is an array of:
352*288*3*120 = 36495360 elements
Since each double takes 8 bytes, that is roughly 292 MB for 120 frames, and for 2000 frames it grows to about 4.9 GB. That is a lot of memory...
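To double-check with a quick computation (a C sketch; sizes taken from the question, 8 bytes per double as in MATLAB):
long long height = 352, width = 288, channels = 3;
long long bytes120  = height * width * channels * 120  * 8;   /* 291962880  bytes, ~292 MB */
long long bytes2000 = height * width * channels * 2000 * 8;   /* 4866048000 bytes, ~4.9 GB */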
Referencing the code I've seen in your withdrawn post, you're calculating the difference between adjacent frame histograms. One option to avoid massive memory allocation might be to hold just two frames in memory, instead of reading all the frames at once.
The function B = zeros([d1 d2 d3 ...]) creates a multi-dimensional array with dimensions d1*d2*d3*...
Depending on width and height, the 3rd dimension of 3 and the 4th dimension of 120 (which effectively results in width*height*360 elements) may result in a very large array. There are certain memory limits on every machine; maybe you reached these... ;)
