What does "integer at offset 60" mean in the SQLite documentation for the `user_version` pragma? - ios

In the SQLite documentation for the user_version pragma it says:
The user_version pragma will get or set the value of the user-version integer at offset 60 in the database header.
What does it mean by "integer at offset 60"? My intention was to use this variable to store my application's schema version. Is this variable formatted as a 32-bit unsigned integer?

"integer at offset 60" refers to the byte offset in the SQLite database's header. So if you were to take all the bytes in the database's header and move to byte number 60, that would be the beginning of the user_version value. Because it is 4 bytes in size (32 bits), you know it occupies bytes 60-63. It is stored as a big-endian 32-bit signed integer, so strictly speaking it is signed rather than unsigned.
For reference, here is the documentation for the database header. If you scroll down to the row where the value in the 'Offset' column is 60, you'll see that this space is dedicated to the value read and written by the user_version pragma.
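If you ever want to see that byte offset for yourself, here is a minimal sketch in C (the database file name is a hypothetical placeholder): seek to byte 60 and assemble the 4 bytes most-significant byte first, since multi-byte integers in the SQLite header are big-endian.

#include <stdio.h>
#include <stdint.h>

int main(void)
{
    FILE *f = fopen("mydb.sqlite", "rb");      /* hypothetical database file */
    if (f == NULL) return 1;

    unsigned char buf[4];
    if (fseek(f, 60, SEEK_SET) != 0 || fread(buf, 1, 4, f) != 4) {
        fclose(f);
        return 1;
    }
    fclose(f);

    /* Bytes 60-63, big-endian; the pragma reports this value as signed. */
    uint32_t raw = ((uint32_t)buf[0] << 24) | ((uint32_t)buf[1] << 16) |
                   ((uint32_t)buf[2] << 8)  |  (uint32_t)buf[3];
    printf("user_version = %d\n", (int32_t)raw);
    return 0;
}

In an app you would normally just run PRAGMA user_version through the SQLite API; this only shows what "offset 60" refers to.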

Related

In Core Data which attribute type (Integer 16 / 32 / 64) should I use to store small numbers? [duplicate]

I want to store an NSUInteger in my Core Data store and I don't know which type I should use (Integer 16, 32, 64) to suit the space needed.
From my understanding:
Integer 16 can hold values from -32,768 to 32,767
Integer 32 can hold values from -2,147,483,648 to 2,147,483,647
Integer 64 can hold values from -very large to very large
and NSUInteger is a typedef of unsigned long, which is equal to unsigned int (Types in objective-c on iPhone),
so if I convert my NSUInteger to an NSNumber with numberWithUnsignedInteger: and save it as an NSNumber (Integer 32), I can retrieve my data back safely, right?
Do you really need the entire range of an NSUInteger? On a 32-bit iOS device that's an unsigned 32-bit value, which can get very large. It will fit into a signed 64-bit type.
But you probably don't need that much range anyway. The maximum for a uint32_t is UINT32_MAX, which is 4,294,967,295 (about 4 billion). If you increment once a second, it'll take you more than 136 years to reach that value. Your user's iPhone won't be around by then... :)
If at all possible, when writing data to disk or across a network, it's best to be explicit about the size of the value. Instead of using NSUInteger as the datatype, use uint16_t, uint32_t, or uint64_t depending on the range you need. This then translates naturally to Integer 16, 32, and 64 in Core Data.
To understand why, consider this scenario:
1) You opt to use the Integer 64 type to store your value.
2) On a 64-bit iOS device (e.g. iPhone 6) it stores the value 5,000,000,000.
3) On a 32-bit iOS device this value is fetched from the store into an NSUInteger (using NSNumber's unsignedIntegerValue).
4) Because NSUInteger is only 32 bits on the 32-bit device, the number is no longer 5,000,000,000; there aren't enough bits to represent 5 billion. If you had swapped the NSUInteger in step 3 for uint64_t, the value would still be 5 billion.
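The narrowing itself is plain C behaviour, so here is a tiny sketch of what happens to the 5,000,000,000 from step 2 when it is squeezed into a 32-bit unsigned type (which is what NSUInteger is on a 32-bit device):

#include <stdio.h>
#include <stdint.h>

int main(void)
{
    uint64_t stored  = 5000000000ULL;       /* value written by the 64-bit device */
    uint32_t fetched = (uint32_t)stored;    /* what a 32-bit NSUInteger can hold  */

    /* 5,000,000,000 mod 2^32 = 705,032,704 -- the high bits are silently lost */
    printf("stored  = %llu\n", (unsigned long long)stored);
    printf("fetched = %u\n", fetched);
    return 0;
}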
If you absolutely must use NSUInteger, then you'll just need to be wary about the issues described above and code defensively for it.
As far as storing unsigned values into the seemingly signed Core Data types, you can safely store them and retrieve them:
NSManagedObject *object = // create object
object.valueNumber = @(4000000000); // Store 4 billion in an Integer 32 Core Data type
[managedObjectContext save:NULL]; // Save value to store
// Later on
NSManagedObject *object = // fetch object from store
uint32_t value = object.valueNumber.unsignedIntegerValue; // value will be 4 billion

RedBeanPHP: UTF8 vs UTF8MB4 issue - Specified key was too long; max key length is 767 bytes

My problem is described here: https://groups.google.com/d/msg/redbeanorm/z8SD3qeMEM4/eROS7wBBtccJ
But I can't reply to the post. It just gets deleted every time. In any case, I'm hoping someone here has run across this.
Precondition: No table named 'my_table' exists
Code:
$bean = R::dispense('my_table');
$bean->setMeta("buildcommand.unique", array(array('barcode')));
$bean->barcode = '000000000000000000';
$bean->date_created = gmdate('Y-m-d H:i:s');
R::store($bean);
Postcondition:
"SQLSTATE[42000]: Syntax error or access violation: 1071 Specified key was too long; max key length is 767 bytes"

Verify that a '*.map' file matches a Delphi application

For my program delphi-code-coverage-wizard, I need to verify that a (detailed) mapping file (.map) matches a Delphi application (.exe).
Of course, this verification should be implemented in Delphi.
Is there a way to check this? Maybe by verifying some information from the EXE?
I think a quite simple heuristic would be to check that the various sections in the PE file start and finish at the same places as the segments listed in the map file:
For example, here's the top of a map file.
Start Length Name Class
0001:00401000 000A4938H .text CODE
0002:004A6000 00000C9CH .itext ICODE
0003:004A7000 000022B8H .data DATA
0004:004AA000 000052ACH .bss BSS
0005:00000000 0000003CH .tls TLS
I also looked at what dumpbin /headers had to say about these sections:
SECTION HEADER #1
.text name
A4938 virtual size
1000 virtual address (00401000 to 004A5937)
A4A00 size of raw data
400 file pointer to raw data (00000400 to 000A4DFF)
0 file pointer to relocation table
0 file pointer to line numbers
0 number of relocations
0 number of line numbers
60000020 flags
Code
Execute Read
SECTION HEADER #2
.itext name
C9C virtual size
A6000 virtual address (004A6000 to 004A6C9B)
E00 size of raw data
A4E00 file pointer to raw data (000A4E00 to 000A5BFF)
0 file pointer to relocation table
0 file pointer to line numbers
0 number of relocations
0 number of line numbers
60000020 flags
Code
Execute Read
...truncated
Look at the .text section. According to dumpbin it starts at 00401000 and finishes at 004A5937 which is a length of 000A4938, exactly as in the .map file. Naturally you'd read the PE file directly rather than running dumpbin, but this illustrates the point.
I'd expect a vanishingly small number of false positives with this approach.
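As a rough illustration of reading the section table directly (rather than running dumpbin), here is a sketch in C; the field offsets come from the PE/COFF header layout, the EXE name is a placeholder, and the same byte offsets carry over if you port it to Delphi. It prints each section's name, RVA, and virtual size so they can be lined up against the Start/Length columns of the .map file.

#include <stdio.h>
#include <stdint.h>

/* Little-endian field readers for the PE headers. */
static uint32_t rd32(const unsigned char *p) {
    return p[0] | (p[1] << 8) | ((uint32_t)p[2] << 16) | ((uint32_t)p[3] << 24);
}
static uint16_t rd16(const unsigned char *p) { return (uint16_t)(p[0] | (p[1] << 8)); }

int main(void)
{
    FILE *f = fopen("project1.exe", "rb");               /* hypothetical EXE name */
    if (!f) return 1;

    unsigned char dos[64], coff[24], sec[40];
    if (fread(dos, 1, 64, f) != 64) { fclose(f); return 1; }

    uint32_t pe_off = rd32(dos + 0x3C);                  /* e_lfanew -> "PE\0\0"  */
    fseek(f, (long)pe_off, SEEK_SET);
    if (fread(coff, 1, 24, f) != 24) { fclose(f); return 1; }

    uint16_t nsections = rd16(coff + 6);                 /* NumberOfSections      */
    uint16_t opt_size  = rd16(coff + 20);                /* SizeOfOptionalHeader  */
    fseek(f, (long)(pe_off + 24 + opt_size), SEEK_SET);  /* start of section table */

    for (unsigned i = 0; i < nsections; i++) {
        if (fread(sec, 1, 40, f) != 40) break;
        printf("%-8.8s  RVA=%08X  VirtualSize=%08X\n",
               (const char *)sec, rd32(sec + 12), rd32(sec + 8));
    }
    fclose(f);
    return 0;
}

Note that the Start addresses in the map excerpt above already include the image base (here 00400000), so either add the ImageBase to the RVA before comparing, or simply compare the lengths, which is usually enough to catch a mismatched pair.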

TCP/IP Client / Server command data

I have a Client/Server architecture (C# .Net 4.0) that sends command packets of data as byte arrays. There is a variable number of parameters in any command, and each parameter is of variable length. Because of this I use delimiters for the end of a parameter and for the command as a whole. The operand is always 2 bytes and both types of delimiter are 1 byte. The last parameter_delimiter is redundant, as command_delimiter provides the same functionality.
The command structure is as follows:
FIELD                 SIZE (BYTES)
operand               2
parameter1            x
parameter_delimiter   1
parameter2            x
parameter_delimiter   1
.............
parameterN            x
command_delimiter     1
Parameters are sourced from many different types, i.e. ints, strings, etc., all encoded into byte arrays.
The problem I have is that parameters, once encoded into byte arrays, sometimes contain bytes with the same value as a delimiter. For example, command_delimiter = 255, and a parameter may have that byte value inside it.
There are 3 ways I can think of to fix this:
1) Encode the parameters differently so that they can never contain the delimiter values (255 and 254). Modulus? This will mean that parameters become larger, i.e. an Int16 will take more than 2 bytes, etc.
2) Do not use delimiters at all, use count and length values at the start of the command structure.
3) Use something else.
To my knowledge, the way TCP/IP buffers work is that SOME SORT of delimiter has to be used to separate 'commands' or 'bundles of data', as a buffer may contain multiple commands, or a command may span multiple buffers.
BinaryReader / BinaryWriter seems like an obvious candidate; the only issue is that the byte array may contain multiple commands (with parameters inside), so the byte array would still have to be chopped up in order to feed into the BinaryReader.
Suggestions?
Thanks.
The standard way to do this is to have the length of the message in the (fixed) first few bytes of the message. So you could use the first 4 bytes to denote the length of a message, then read that many bytes for the content of the message. The next 4 bytes would be the length of the next message. A length of 0 could indicate the end of messages. Or you could use a header with a message count.
Also, remember TCP is a byte stream, so don't expect a complete message to be available every time you read data from a socket. You could receive an arbitrary number of bytes on every read.
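For example, here is a minimal receive-side sketch in C of that length-prefix scheme (the 4-byte little-endian prefix is just an assumed convention, and the socket is already connected); the same looping idea applies in C# with NetworkStream.Read.

#include <stdint.h>
#include <stdlib.h>
#include <sys/types.h>
#include <sys/socket.h>

/* Read exactly n bytes, looping because TCP may hand back fewer per recv(). */
int read_exact(int fd, unsigned char *buf, size_t n)
{
    size_t got = 0;
    while (got < n) {
        ssize_t r = recv(fd, buf + got, n - got, 0);
        if (r <= 0) return -1;               /* peer closed or error */
        got += (size_t)r;
    }
    return 0;
}

/* Read one length-prefixed message; the caller frees *msg. */
int read_message(int fd, unsigned char **msg, uint32_t *len)
{
    unsigned char hdr[4];
    if (read_exact(fd, hdr, 4) != 0) return -1;

    /* 4-byte little-endian length prefix (assumed convention) */
    *len = hdr[0] | (hdr[1] << 8) | ((uint32_t)hdr[2] << 16) | ((uint32_t)hdr[3] << 24);
    if (*len == 0) { *msg = NULL; return 0; }            /* 0 = end of messages */

    *msg = malloc(*len);
    if (*msg == NULL) return -1;
    if (read_exact(fd, *msg, *len) != 0) { free(*msg); return -1; }
    return 0;
}

The sender simply writes the 4-byte length followed by the payload; no delimiters or escaping are needed, so parameter bytes can take any value, including 255.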

Delta row compression in PCLXL

Is there a difference in the implementation of delta row compression between PCLXL and PCL5?
I was using Delta Row compression in PCL5, but when I used the same method in PCLXL, the file is not valid. I checked the output using EscapeE and it says that the image data size is incorrect.
Could anyone point me to some material explaining how delta row compression is implemented in PCLXL?
Thanks,
kreb
Hmm, I found this and it is indeed different.
From http://www.tek-tips.com/viewthread.cfm?qid=1577259&page=1, user guptadeepak03:
Actually I did some research on that too. I found out the hard way that there are a few differences in the way the formats work in PCL-XL and PCL-5. To quote from the reference manual provided by HP (PCL-XL ver 2.1):
The PCL XL implementation follows the PCL5 implementation except in the following:
1) the seed row is initialized to zeroes and contains the number of bytes defined by SourceWidth in the BeginImage operator.
2) the delta row is preceded by a 2-byte byte count which indicates the number of bytes to follow for the delta row. The byte count is expected to be in LSB MSB order.
3) to repeat the last row, use the 2-byte byte count of 00 00.
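So under PCL XL every compressed row has to be written with its own byte count in front, least-significant byte first. A rough sketch in C of point 2 (the buffer handling is purely illustrative):

#include <stdint.h>
#include <string.h>

/* Prefix a PCL XL delta row with its 2-byte count in LSB/MSB order.
   'row' holds the already-compressed delta data for one raster row;
   a count of 00 00 means "repeat the previous row". */
size_t emit_delta_row(const unsigned char *row, uint16_t row_len, unsigned char *out)
{
    out[0] = (unsigned char)(row_len & 0xFF);          /* LSB */
    out[1] = (unsigned char)((row_len >> 8) & 0xFF);   /* MSB */
    if (row_len > 0)
        memcpy(out + 2, row, row_len);
    return (size_t)row_len + 2;
}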
Will mark this answered as soon as I can. Thanks.
