How does mongoid generate BSON ObjectId?

This is with reference to this answer: https://stackoverflow.com/a/5694803/762747
So the BSON Object ID consists of:
[4 bytes seconds since epoch, 3 bytes machine hash, 2 bytes process ID, 3 bytes counter]
What logic does mongoid/moped (http://mongoid.org/) use to generate the 3 bytes counter - do they increment or generate a random number?
In effect, can we be almost certain that the BSON ObjectId generated by mongoid will be unique (and avoid point 2 in the referenced answer)?

We can be almost certain that the ObjectId will be unique. :)
When not using the native extensions (which apparently are now included in the bson gem, the gem used by mongoid/moped), it uses a counter.
def next(time = nil)
  @mutex.lock
  begin
    count = @counter = (@counter + 1) % 0xFFFFFF
  ensure
    @mutex.unlock rescue nil
  end
  generate(time || ::Time.new.to_i, count)
end
It's using a counter as you can see in the Generator class.
When using the native C code for generating an ObjectId, it too is using a counter:
static VALUE rb_object_id_generator_next(int argc, VALUE* time, VALUE self)
{
  char bytes[12];
  unsigned long t;
  unsigned short pid = htons(getpid());

  if (argc == 0 || (argc == 1 && *time == Qnil)) {
    t = rb_current_time_milliseconds();
  }
  else {
    t = htonl(NUM2UINT(rb_funcall(*time, rb_intern("to_i"), 0)));
  }

  memcpy(&bytes, &t, 4);
  memcpy(&bytes[4], rb_bson_machine_id, 3);
  memcpy(&bytes[7], &pid, 2);
  memcpy(&bytes[9], (unsigned char*) &rb_bson_object_id_counter, 3);

  rb_bson_object_id_counter++;
  return rb_str_new(bytes, 12);
}
Addendum (thanks to @ChrisHeald for the tip)
Assuming the Ruby implementation in use has a GIL (Global Interpreter Lock) and the native C implementation above is being used, the increment of the counter, rb_bson_object_id_counter, is also safe, as there is an outer lock around the call into this code.
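To make the layout concrete, here is a small C sketch (my own illustration, not code from the bson gem) that splits a 12-byte ObjectId back into the four fields listed above, assuming each field is stored big-endian as the ObjectId spec describes:
#include <stdio.h>
#include <stdint.h>

/* Sketch only: decompose a 12-byte ObjectId into its four fields.
 * Assumes big-endian field layout as described in the ObjectId spec. */
static void dump_object_id(const unsigned char id[12])
{
    uint32_t seconds = ((uint32_t)id[0] << 24) | (id[1] << 16) | (id[2] << 8) | id[3];
    uint32_t machine = ((uint32_t)id[4] << 16) | (id[5] << 8) | id[6];
    uint16_t pid     = (uint16_t)((id[7] << 8) | id[8]);
    uint32_t counter = ((uint32_t)id[9] << 16) | (id[10] << 8) | id[11];

    printf("seconds=%u machine=0x%06x pid=%u counter=%u\n",
           (unsigned)seconds, (unsigned)machine, (unsigned)pid, (unsigned)counter);
}
Since only the last three bytes come from the counter, two ObjectIds generated in the same process and the same second can only collide if the counter wraps all the way past 0xFFFFFF within that second, which is why the simple modulo increment is enough in practice.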

Related

writing to flash memory dspic33e

I have some questions regarding the flash memory with a dspic33ep512mu810.
I'm aware of how it should be done:
set all the registers for address, latches, etc., then do the sequence to start the write procedure or call the builtin functions.
But I find there are some small differences between what I'm experiencing and what is in the doc.
1) When writing the flash in WORD mode, the doc makes it look pretty straightforward. Following is the example code from the doc:
int varWord1L = 0xXXXX;
int varWord1H = 0x00XX;
int varWord2L = 0xXXXX;
int varWord2H = 0x00XX;
int TargetWriteAddressL; // bits<15:0>
int TargetWriteAddressH; // bits<22:16>
NVMCON = 0x4001; // Set WREN and word program mode
TBLPAG = 0xFA; // write latch upper address
NVMADR = TargetWriteAddressL; // set target write address
NVMADRU = TargetWriteAddressH;
__builtin_tblwtl(0,varWord1L); // load write latches
__builtin_tblwth(0,varWord1H);
__builtin_tblwtl(0x2,varWord2L);
__builtin_tblwth(0x2,varWord2H);
__builtin_disi(5); // Disable interrupts for NVM unlock sequence
__builtin_write_NVM(); // initiate write
while(NVMCONbits.WR == 1);
But that code doesn't work for every address I want to write to. I found a fix to write one WORD, but I can't write 2 WORDs wherever I want. I store everything in the aux memory, so the upper address (NVMADRU) is always 0x7F for me; NVMADR is the address I change. What I'm seeing is that if the address I want to write to, modulo 4, is not 0, then I have to put my value in the last two latches; otherwise I have to put the value in the first latches.
If the address modulo 4 is not zero, it doesn't work like the doc code above: the value that ends up at the address is whatever is in the second set of latches.
I fixed it for writing only one word at a time like this:
if(Address % 4)
{
    __builtin_tblwtl(0, 0xFFFF);
    __builtin_tblwth(0, 0x00FF);
    __builtin_tblwtl(2, ValueL);
    __builtin_tblwth(2, ValueH);
}
else
{
    __builtin_tblwtl(0, ValueL);
    __builtin_tblwth(0, ValueH);
    __builtin_tblwtl(2, 0xFFFF);
    __builtin_tblwth(2, 0x00FF);
}
Why am I seeing this behavior?
2) I also want to write a full row.
That also doesn't seem to work for me, and I don't know why, because I'm doing what is in the doc.
I tried a simple row-write and at the end I just read back the first 3 or 4 elements that I wrote, to see if it worked:
NVMCON = 0x4002; //set for row programming
TBLPAG = 0x00FA; //set address for the write latches
NVMADRU = 0x007F; //upper address of the aux memory
NVMADR = 0xE7FA;
int latchoffset;
latchoffset = 0;
__builtin_tblwtl(latchoffset, 0);
__builtin_tblwth(latchoffset, 0); //current = 0, available = 1
latchoffset+=2;
__builtin_tblwtl(latchoffset, 1);
__builtin_tblwth(latchoffset, 1); //current = 0, available = 1
latchoffset+=2;
/* ... and so on, all the way to 127 (I know I could have done it in a loop) ... */
__builtin_tblwtl(latchoffset, 127);
__builtin_tblwth(latchoffset, 127);
INTCON2bits.GIE = 0; //stop interrupt
__builtin_write_NVM();
while(NVMCONbits.WR == 1);
INTCON2bits.GIE = 1; //start interrupt
int testaddress;
testaddress = 0xE7FA;
status = NVMemReadIntH(testaddress);
status = NVMemReadIntL(testaddress);
testaddress += 2;
status = NVMemReadIntH(testaddress);
status = NVMemReadIntL(testaddress);
testaddress += 2;
status = NVMemReadIntH(testaddress);
status = NVMemReadIntL(testaddress);
testaddress += 2;
status = NVMemReadIntH(testaddress);
status = NVMemReadIntL(testaddress);
What I see is that the value stored at address 0xE7FA is 125, at 0xE7FC it is 126, and at 0xE7FE it is 127. The rest are all 0xFFFF.
Why is it taking only the last 3 latches and writing them to the first 3 addresses?
Thanks in advance for your help people.
"The dsPIC33 program memory space is treated as 24 bits wide; it is more appropriate to think of each address of the program memory as a lower and upper word, with the upper byte of the upper word being unimplemented."
(dsPIC33EPXXX datasheet)
There is a phantom byte every two program words.
Your code
if(Address % 4)
{
__builtin_tblwtl(0, 0xFFFF);
__builtin_tblwth(0, 0x00FF);
__builtin_tblwtl(2, ValueL);
__builtin_tblwth(2, ValueH);
}
else
{
__builtin_tblwtl(0, ValueL);
__builtin_tblwth(0, ValueH);
__builtin_tblwtl(2, 0xFFFF);
__builtin_tblwth(2, 0x00FF);
}
...will be fine for writing a bootloader that generates values from a valid Intel HEX file, but it doesn't make storing data structures simple, because the phantom byte is not taken into account.
If you create a uint32_t variable and look at the compiled HEX file, you'll notice that it in fact uses up the least significant words of two 24-bit program words. I.e. the 32-bit value is placed into a 64-bit range, but only 48 of those 64 bits are programmable; the others are phantom bytes (or zeros). That leaves three bytes per address modulo 4 that are actually programmable.
What I tend to do when writing data is to keep everything 32-bit aligned and do the same as the compiler does.
Writing:
UINT32 value = ....;
:
__builtin_tblwtl(0, value.word.word_L); // least significant word of 32-bit value placed here
__builtin_tblwth(0, 0x00); // phantom byte + unused byte
__builtin_tblwtl(2, value.word.word_H); // most significant word of 32-bit value placed here
__builtin_tblwth(2, 0x00); // phantom byte + unused byte
Reading:
UINT32 *value
:
value->word.word_L = __builtin_tblrdl(offset);
value->word.word_H = __builtin_tblrdl(offset+2);
UINT32 structure:
typedef union _UINT32 {
    uint32_t val32;
    struct {
        uint16_t word_L;
        uint16_t word_H;
    } word;
    uint8_t bytes[4];
} UINT32;
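Putting it together, here is a minimal sketch (my own, reusing the question's register sequence and the UINT32 union above; the target address is a made-up example, and the page is assumed to have been erased beforehand) for writing one 32-bit value in WORD mode to aux memory:
UINT32 value;
value.val32 = 0x00123456;               // 32-bit payload to store

NVMCON  = 0x4001;                       // WREN + word program mode
TBLPAG  = 0xFA;                         // write latch upper address
NVMADRU = 0x007F;                       // aux memory upper address
NVMADR  = 0xE7F8;                       // 32-bit aligned target (example address)

__builtin_tblwtl(0, value.word.word_L); // least significant word
__builtin_tblwth(0, 0x00);              // phantom/unused byte
__builtin_tblwtl(2, value.word.word_H); // most significant word
__builtin_tblwth(2, 0x00);              // phantom/unused byte

__builtin_disi(5);                      // disable interrupts for NVM unlock sequence
__builtin_write_NVM();                  // initiate write
while(NVMCONbits.WR == 1);              // wait for the write to finish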

Get the redis key-value size in memory

I'm trying to get the size of a key-value or just a key or just a value in redis.
Using the debug object key command returns the serialized size of a key-value (as if it were written to disk), not the actual number of bytes it is using in memory.
This can be confirmed by checking the source code (see this related question: Redis: Show database size/size for keys):
https://github.com/antirez/redis/blob/4082c38a60eedd524c78ef48c1b241105f4ddc50/src/debug.c#L337-L343
https://github.com/antirez/redis/blob/4082c38a60eedd524c78ef48c1b241105f4ddc50/src/rdb.c#L663-L671
Looking at the source code:
/* Save a string object as [len][data] on disk. If the object is a string
 * representation of an integer value we try to save it in a special form */
ssize_t rdbSaveRawString(rio *rdb, unsigned char *s, size_t len) {
    int enclen;
    ssize_t n, nwritten = 0;

    /* Try integer encoding */
    if (len <= 11) {
        unsigned char buf[5];
        if ((enclen = rdbTryIntegerEncoding((char*)s,len,buf)) > 0) {
            if (rdbWriteRaw(rdb,buf,enclen) == -1) return -1;
            return enclen;
        }
    }

    /* Try LZF compression - under 20 bytes it's unable to compress even
     * aaaaaaaaaaaaaaaaaa so skip it */
    if (server.rdb_compression && len > 20) {
        n = rdbSaveLzfStringObject(rdb,s,len);
        if (n == -1) return -1;
        if (n > 0) return n;
        /* Return value of 0 means data can't be compressed, save the old way */
    }

    /* Store verbatim */
    if ((n = rdbSaveLen(rdb,len)) == -1) return -1;
    nwritten += n;
    if (len > 0) {
        if (rdbWriteRaw(rdb,s,len) == -1) return -1;
        nwritten += len;
    }
    return nwritten;
}
And confirm through redis-cli:
127.0.0.1:6379> set a aaaaaaaaaaaaaaaaaaa
OK
127.0.0.1:6379> debug object a
Value at:0x7f985822f168 refcount:1 encoding:embstr serializedlength:20 lru:11611136 lru_seconds_idle:2
127.0.0.1:6379> set a aaaaaaaaaaaaaaaaaaaaaaaaaaaaa
OK
127.0.0.1:6379> debug object a
Value at:0x7f985827c428 refcount:1 encoding:embstr serializedlength:12 lru:11611147 lru_seconds_idle:1
All the different CLI tools report the serialized size of the object and not the memory size, which is the interesting and important one.
As of Redis v4, the MEMORY USAGE command does a much better job at guessing the footprint of a key and its value.
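For example (the reported byte count is illustrative; it depends on the Redis version, the allocator and key overhead):
127.0.0.1:6379> set a aaaaaaaaaaaaaaaaaaa
OK
127.0.0.1:6379> memory usage a
(integer) 60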

Bad Access/Pointer Issue in Objective-C

I've got the following bit of code:
Byte * tokenOut = NULL;
for (int i = 0; i < sizeof(Tknout); i++)
{
    tokenOut[i * 2] = (Tknout[i] >> 8);
    tokenOut[(i * 2) + 1] = (Tknout[i] & 0xFF);
}
But it generates an EXC_BAD_ACCESS error at runtime when I try to update tokenOut. I'd appreciate it if somebody could tell me what I'm doing wrong!
Thanks!
You are dereferencing a NULL pointer.
Try
Byte whatever;
Byte *outToken = &whatever;
// Do the rest.
Now there is valid memory for you to write to.
Except you will still have issues because you will write past the end of your variable. So maybe you wanted an array?
sizeof(outTkn) will return 1 (1 byte)
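One way to fix it, sketched under the assumption that Tknout is an array of 16-bit values (its declaration isn't shown in the question):
// Sketch only: Tknout's real type isn't shown, so a uint16_t array is assumed.
// Allocate two output bytes per input element instead of writing through NULL.
size_t count = sizeof(Tknout) / sizeof(Tknout[0]);    // element count, not byte count
Byte *tokenOut = malloc(count * 2);                    // two output bytes per element
if (tokenOut != NULL) {
    for (size_t i = 0; i < count; i++) {
        tokenOut[i * 2]     = (Byte)(Tknout[i] >> 8);   // high byte
        tokenOut[i * 2 + 1] = (Byte)(Tknout[i] & 0xFF); // low byte
    }
    // ... use tokenOut, then free(tokenOut);
}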

Heap corruption detected - iPhone 5S only

I am developing an app that listens for frequencies/pitches. It works fine on the iPhone 4S, the simulator and others, but not on the iPhone 5S. This is the message I am getting:
malloc: *** error for object 0x178203a00: Heap corruption detected, free list canary is damaged
Any suggestions on where I should start digging?
Thanks!
The iPhone 5s has an arm64/64-bit CPU. Check all the analyzer and compiler warnings for attempts to store 64-bit pointers (and other 64-bit values) in 32-bit C data types.
Also make sure all your audio code's parameter passing, object messaging, and manual memory management is thread safe and meets all real-time requirements.
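A contrived sketch of the kind of 32/64-bit mistake meant here (not taken from the question's code):
#include <stdint.h>
#include <stdlib.h>

/* Contrived example: on arm64 a pointer is 8 bytes, so storing it in a
 * 32-bit integer truncates it, and writing through the "restored" pointer
 * can then corrupt unrelated heap memory. */
void broken_context_roundtrip(void)
{
    float *buffer = malloc(1024 * sizeof(float));

    uint32_t saved    = (uint32_t)(uintptr_t)buffer;   // BUG: high 32 bits are lost
    float   *restored = (float *)(uintptr_t)saved;     // may no longer point at buffer

    restored[0] = 0.0f;                                // possible heap corruption

    free(buffer);
}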
In case it helps anyone, I had exactly the same problem as described above.
The cause in my particular case was that pthread_create(pthread_t* thread, ...) on ARM64 was filling in *thread some time AFTER the thread had started. On OS X, ARM32 and the simulator, it consistently filled in this value before the start_routine was called.
If I performed a pthread_detach operation in the running thread before that value was written (even by using pthread_self() to get the current thread_t), I would end up with the heap corruption message.
I added a small loop in my thread dispatcher that waited until that value was filled in -- after which the heap errors went away. Don't forget 'volatile'!
Restructuring the code might be a better way to fix this -- it depends on your situation. (I noticed this in a unit test that I'd written, I didn't trip up on this issue on any 'real' code)
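A rough sketch of the workaround described above (the identifiers are made up; the real code isn't shown):
#include <pthread.h>
#include <stdbool.h>

/* Sketch only: the creator sets a volatile flag once pthread_create has
 * stored the thread id; the new thread spins on that flag before using it. */
typedef struct {
    pthread_t thread;
    volatile bool thread_id_ready;
} worker_t;

static void *worker_main(void *arg)
{
    worker_t *w = arg;

    while (!w->thread_id_ready) { /* wait for the creator to finish writing w->thread */ }

    pthread_detach(w->thread);   /* now safe: w->thread has been written */
    /* ... real work ... */
    return NULL;
}

void start_worker(worker_t *w)
{
    w->thread_id_ready = false;
    pthread_create(&w->thread, NULL, worker_main, w);
    w->thread_id_ready = true;   /* id written; the worker may proceed */
}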
Same problem, but in my case I allocated 10 bytes with malloc and then tried to use 20 bytes; that caused the heap corruption.
@@ -64,7 +64,7 @@ char* bytesToHex(char* buf, int size) {
* be converted to two hex characters, also add an extra space for the terminating
* null byte.
* [size] is the size of the buf array */
- int len = (size * 2) + 1;
+ int len = (size * 3) + 1;
char* output = (char*)malloc(len * sizeof(char));
memset(output, 0, len);
/* pointer to the first item (0 index) of the output array */
char *ptr = &output[0];
int i;
for (i = 0; i < size; i++) {
/* "sprintf" converts each byte in the "buf" array into a 2 hex string
* characters appended with a null byte, for example 10 => "0A\0".
*
* This string would then be added to the output array starting from the
* position pointed at by "ptr". For example if "ptr" is pointing at the 0
* index then "0A\0" would be written as output[0] = '0', output[1] = 'A' and
* output[2] = '\0'.
*
* "sprintf" returns the number of chars in its output excluding the null
* byte, in our case this would be 2. So we move the "ptr" location two
* steps ahead so that the next hex string would be written at the new
* location, overriding the null byte from the previous hex string.
*
* We don't need to add a terminating null byte because it's been already
* added for us from the last hex string. */
ptr += sprintf(ptr, "%02X ", buf[i] & 0xFF);
}
return output;
}
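The arithmetic behind the fix: each input byte is printed as "%02X " (two hex digits plus a space), so for size input bytes sprintf writes size * 3 characters plus a terminating null byte, while the original code only allocated size * 2 + 1 bytes; for size = 10 that is 31 bytes written into a 21-byte buffer, which is exactly the kind of overrun the allocator's canary catches.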

How do I decode, change, then re-encode a CORBA IOR file (Visibroker) in my Java client code?

I am writing code to ingest the IOR file generated by the team responsible for the server and use it to bind my client to their object. Sounds easy, right?
For some reason a bit beyond my grasp (having to do with firewalls, DMZs, etc.), the value for the server inside the IOR file is not something we can use. We have to modify it. However, the IOR string is encoded.
What does Visibroker provide that will let me decode the IOR string, change one or more values, then re-encode it and continue on as normal?
I've already looked into IORInterceptors and URL Naming but I don't think either will do the trick.
Thanks in advance!
When you feel like you need to hack an IOR, resist the urge to do so by writing code and whatnot to mangle it to your liking. IORs are meant to be created and dictated by the server that contains the referenced objects, so the moment you start mucking around in there, you're kinda "voiding your warranty".
Instead, spend your time finding the right way to make the IOR usable in your environment by having the server use an alternative hostname when it generates them. Most ORBs offer such a feature. I don't know Visibroker's particular configuration options at all, but a quick Google search revealed this page that shows a promising value:
vbroker.se.iiop_ts.host
Specifies the host name used by this server engine.
The default value, null, means use the host name from the system.
Hope that helps.
A long time ago I wrote an IOR parser for GNU Classpath; the code is available. It is a normal parser, written with awareness of the format, so it should not "void the warranty", I think. An IOR contains multiple tagged profiles that are encapsulated very much like XML, so we can parse/modify the profiles we need and understand and leave the rest untouched.
The profile we need to parse is TAG_INTERNET_IOP. It contains the version number, host, port and object key. Code that reads and writes this profile can be found in the gnu.IOR class. I am sorry this is part of the system library and not a nice piece of code to copy-paste here, but it should not be very difficult to rip it out with a couple of dependent classes.
This question has been repeatedly asked as CORBA :: Get the client ORB address and port with use of IIOP
Use the FixIOR tool (binary) from jacORB to patch the address and port of an IOR. Download the binary (unzip it) and run:
fixior <new-address> <new-port> <ior-file>
The tool will overwrite the content of the IOR file with the 'patched' IOR.
You can use IOR Parser to check the resulting IOR and compare it to your original IOR.
Use this function to change the IOR; pass the stringified IOR as the first argument.
#include <cstring>
#include <iostream>
using std::cout;
using std::endl;

void hackIOR(const char* str, char* newIOR)
{
    size_t s = (str ? strlen(str) : 0);
    char temp[1000];
    strcpy(newIOR, "IOR:");
    const char *p = str;
    s = (s - 4) / 2;            // how many octets are there in the string
    p += 4;
    int i;
    for (i = 0; i < (int)s; i++) {
        int j = i * 2;
        char v = 0;
        if (p[j] >= '0' && p[j] <= '9') {
            v = ((p[j] - '0') << 4);
        }
        else if (p[j] >= 'a' && p[j] <= 'f') {
            v = ((p[j] - 'a' + 10) << 4);
        }
        else if (p[j] >= 'A' && p[j] <= 'F') {
            v = ((p[j] - 'A' + 10) << 4);
        }
        else
            cout << "invalid octet" << endl;
        if (p[j+1] >= '0' && p[j+1] <= '9') {
            v += (p[j+1] - '0');
        }
        else if (p[j+1] >= 'a' && p[j+1] <= 'f') {
            v += (p[j+1] - 'a' + 10);
        }
        else if (p[j+1] >= 'A' && p[j+1] <= 'F') {
            v += (p[j+1] - 'A' + 10);
        }
        else
            cout << "invalid octet" << endl;
        temp[i] = v;
    }
    temp[i] = 0;
    // Now temp holds the decoded IOR octets; print it if needed.
    // Replace the object ID in temp here.
    // Then encode it back with the following code.
    int temp1, temp2;
    int l, k;
    for (k = 0, l = 4; k < (int)s; k++)
    {
        temp1 = temp2 = temp[k];
        temp1 &= 0x0F;
        temp2 = temp2 & 0xF0;
        temp2 = temp2 >> 4;
        if (temp2 >= 0 && temp2 <= 9)
        {
            newIOR[l++] = temp2 + '0';
        }
        else if (temp2 >= 10 && temp2 <= 15)
        {
            newIOR[l++] = temp2 + 'A' - 10;
        }
        if (temp1 >= 0 && temp1 <= 9)
        {
            newIOR[l++] = temp1 + '0';
        }
        else if (temp1 >= 10 && temp1 <= 15)
        {
            newIOR[l++] = temp1 + 'A' - 10;
        }
    }
    newIOR[l] = 0;
    // The new IOR is now in newIOR.
}
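A minimal call might look like this (the IOR string and buffer size are made-up placeholders):
char newIOR[1000];
hackIOR("IOR:010000002600000049444c3a...", newIOR);  // placeholder, not a real IOR
// newIOR now holds the re-encoded "IOR:..." string; the actual edit of the
// decoded octets happens where the comments indicate, inside hackIOR.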
Hope this works for you.
