MS Graph API: What is subscription ID max length? - microsoft-graph-api

What is the maximum length of the MS Graph "subscription id" property?
In the examples the id is 36 characters long (e.g. "7f105c7d-2dc5-4530-97cd-4e7ae6534c07").
Will it always be like this? I can't find this information in the documentation.

The documentation doesn't explicitly state it is a UUID... though it certainly looks like one, probably is one, and will most likely always be one. However, imho, unless you really have problems in terms of storage, it is best to reserve a reasonable size and treat this ID as an "opaque string" that you just store and assume is unique (so you can make a key of it, or build an index on it, if you are using a database as the storage). If there are other reasons why you need to know the size, please clarify...
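For instance, a minimal sketch of that "opaque string" approach in Python — the 64-character limit is an arbitrary assumption chosen to leave headroom over the 36 characters seen in the examples, and the UUID parse is only an optional sanity check:

    import uuid

    SUBSCRIPTION_ID_MAX_LEN = 64  # assumption: generous headroom over the observed 36 chars

    def store_subscription_id(sub_id: str) -> str:
        # Treat the ID as an opaque, unique string; don't rely on it being a UUID.
        if len(sub_id) > SUBSCRIPTION_ID_MAX_LEN:
            raise ValueError("subscription id longer than the reserved size")
        try:
            uuid.UUID(sub_id)  # today's IDs happen to parse as UUIDs...
        except ValueError:
            pass  # ...but the format isn't documented as guaranteed, so accept it anyway
        return sub_id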

Related

Keeping min/max value in a BigTable cell

I have a problem where it would be very helpful if I was able to send a ReadModifyWrite request to BigTable where it only overwrites the value if the new value is bigger/smaller than the existing value. Is this somehow possible?
Note: I thought of a hacky way where I use the timestamp as my actual value, and set the max number of versions to 1, so that would keep the "latest" value, which is the highest timestamp. But those timestamps would have values from 1 to 10 instead of ~1.5bn like real epoch timestamps. Would this work?
I looked into the existing APIs but haven't found anything that would help me do this. It seems like it is available in DynamoDB, so I guess it's reasonable to ask for BigTable to have it as well https://docs.aws.amazon.com/amazondynamodb/latest/APIReference/API_UpdateItem.html#API_UpdateItem_RequestSyntax
Your timestamp approach could probably be made to work, but would interact poorly with stuff like age-based garbage collection.
I also assume you mean CheckAndMutate as opposed to ReadModifyWrite? The former lets you do conditional overwrites, the latter lets you do unconditional increments/appends. If you actually want an increment that only works if the result will be larger, just make sure you only send positive increments ;)
My suggestion, assuming your client language supports it, would be to use a CheckAndMutateRow request with a value_range_filter. This will require you to use a fixed-width encoding for your values, but that's no different than re-using the timestamp.
Example: if you want to set the value to 000768, but only if that would be an increase, use a value_range_filter from 000000 to 000767, inclusive, and do your write in the true_mutation of the CheckAndMutate.
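For what it's worth, here is roughly what that looks like with the Python client (google-cloud-bigtable); the project, instance, table and column-family names are placeholders, and the values use the same fixed-width 6-digit encoding as the example above:

    from google.cloud import bigtable
    from google.cloud.bigtable import row_filters

    client = bigtable.Client(project="my-project")
    table = client.instance("my-instance").table("my-table")

    new_value = b"000768"  # fixed-width encoding so byte order matches numeric order

    # Only write if the current value falls in the inclusive range 000000..000767,
    # i.e. is strictly smaller than the new value.
    # (Note: if the row/cell doesn't exist yet, the filter won't match.)
    row = table.conditional_row(
        b"row-key",
        filter_=row_filters.ValueRangeFilter(
            start_value=b"000000",
            end_value=b"000767",
            inclusive_start=True,
            inclusive_end=True,
        ),
    )
    row.set_cell("cf", b"counter", new_value, state=True)  # the true_mutation
    applied = row.commit()  # True if the filter matched and the write happened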

Are UUIDs, at the most basic level, just a string of unique characters?

I am currently learning about UUID in iOS, and of course I'm trying to make sense of them. From what I can gather, when you call NSUUID(), it returns a 128-bit string that is completely unique (though I'm not currently interested in how it ensures a completely unique string; I figure it takes into account the date, time, and device identity). To make use of this string, you can append it to the end of the Documents directory (which I believe is unique to each application) to ensure a unique file path that can be used to access files later. Is this a correct understanding of the concept?
Globally Unique Identifiers are 128-bit binary strings.
Microsoft COM uses them to prevent "name collisions" between components without needing some "central naming authority" (like we have for DNS names, IP addresses, broadcast frequencies, etc etc).
GUIDs are likely to be unique ... but it's not guaranteed.
Here is a good article explaining more:
http://betterexplained.com/articles/the-quick-guide-to-guids/
And yes, your understanding of iOS NSUUIDs is exactly right:
http://nshipster.com/nstemporarydirectory/
http://nshipster.com/uuid-udid-unique-identifier/
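The same idea, sketched in Python rather than Swift purely for illustration (the base directory and file extension are placeholders):

    import os
    import tempfile
    import uuid

    # Append a freshly generated UUID to a base directory to get a unique file path,
    # analogous to appending NSUUID().uuidString to the app's Documents directory.
    base_dir = tempfile.gettempdir()  # stand-in for the Documents directory
    unique_path = os.path.join(base_dir, f"{uuid.uuid4()}.dat")
    print(unique_path)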
It depends on the version of the Universally Unique Identifier. Version 4 is almost guaranteed to be unique, but not completely. Wikipedia states the following:
"Out of a total of 128 bits, two bits indicate an RFC 4122 ("Leach-Salz") UUID and four bits the version (0100 indicating "randomly generated"), so randomly generated UUIDs have 122 random bits. The chance of two such UUIDs having the same value can be calculated using probability theory (birthday paradox). Using the approximation"
Reference: https://en.wikipedia.org/wiki/Universally_unique_identifier#Version_4_.28random.29
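The approximation the quote trails off into is the standard birthday-bound estimate p(n) ≈ n^2 / (2 * 2^122) for n randomly generated version-4 UUIDs (valid while p is small). A quick way to evaluate it:

    def v4_collision_probability(n: int) -> float:
        # Birthday-bound approximation over the 122 random bits of a v4 UUID.
        return n * n / (2 * 2 ** 122)

    print(v4_collision_probability(10 ** 9))  # ~9.4e-20, even for a billion UUIDs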

How to determine the size of a single key in Redis

How can I know the size (in KB) of a particular key in redis?
I'm aware of info memory command, but it gives combined size of Redis instance, not for a single key.
I know this is an old question but, just for the record, Redis has implemented a MEMORY USAGE <key> command since version 4.0.0.
The output is the number of bytes required to store the key in RAM.
Reference: https://redis.io/commands/memory-usage
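A minimal sketch with redis-py, assuming a local Redis 4.0+ instance on the default port:

    import redis

    r = redis.Redis()
    r.set("mykey", "some value")

    # MEMORY USAGE reports the bytes the key and its value take up in RAM,
    # including internal overhead; requires Redis >= 4.0.
    print(r.memory_usage("mykey"))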
You currently (v2.8.23 & v3.0.5) can't.
The serializedlength from DEBUG OBJECT (as suggested by #Kumar) is not indicative of the value's true size in RAM - Redis employs multiple "tricks" to save RAM, and on top of that you also need to account for the data structure's overhead (and perhaps some of Redis' global dictionary as well).
The good news is that there has been talk on the topic in the OSS project and it is likely that in the future memory introspection will be greatly improved.
Note: I started (and stopped for the time being) a series on the topic - here's the 1st part: https://redislabs.com/blog/redis-ram-ramifications-i
DEBUG OBJECT <key> reveals something like the serializedlength of the key, which was in fact what I was looking for... For a whole database you need to aggregate all values for KEYS *, which shouldn't be too difficult with a scripting language of your choice... The bad thing is that redis.io doesn't really have a lot of information about DEBUG OBJECT.
Why not try
APPEND {your-key} ""
This will append nothing to the existing value but return the current length.
If you just want to get the length of a key (string): STRLEN
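Both of those in redis-py, as a quick sketch — note they return the length of the string value in bytes, not its RAM footprint:

    import redis

    r = redis.Redis()
    r.set("greeting", "hello")

    # APPEND with an empty string leaves the value unchanged and returns its length.
    print(r.append("greeting", ""))  # 5
    # STRLEN returns the same length directly, for string values.
    print(r.strlen("greeting"))      # 5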

How do systems typically map a 997 or 999 acknowledgement back to the originating ISA?

The implementation guides (and most web resources I can find) describe the GS06 and ST02 Control Numbers as being unique only within the Interchange they are contained in. So when we build our GS and ST segments we just start the control numbers at 1 and increment as we add more Functional Groups and/or Transaction Sets. The ISA13 control numbers we generate are always unique.
The dilemma is that when we receive a 999 acknowledgment, it does not include any reference to the ISA control number it's responding to. So we have no way to find the correct originating Functional Group in our records.
This seems like a problem that anyone receiving functional acknowledgements would face, but clearly lots of systems and companies handle it, so what is the typical practice to reconcile 997s or 999s? I think we must be missing something in our reading of the guides.
GS06 and ST02 only have to be unique within the interchange, but if you use an ID that's truly unique for each one (not just within the message), then you can skip right to the proper transaction set or functional group, not just the right message.
I typically have GS start at 1 and increment the same way that you do, but the ST02 I keep unique (to the extent allowed by the 9 character limit).
GS06 is supposed to be globally unique, not only within the interchange. This is from X12-6
In order to provide sufficient discrimination for the acknowledgment process to operate reliably and to ensure that audit trails are unambiguous, the combination of Functional ID Code (GS01), Application Sender's ID (GS02), Application Receiver's ID (GS03), and Functional Group Control Numbers (GS06, GE02) shall by themselves be unique within a reasonably extended time frame whose boundaries shall be defined by trading partner agreement. Because at some point it may be necessary to reuse a sequence of control numbers, the Functional Group Date and Time may serve as an additional discriminant only to differentiate functional group identity over the longest possible time frame.
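A hypothetical sketch of the reconciliation pattern described above: keep an outbound index keyed by the fields a 999 does echo back (AK1 carries the functional ID code and group control number), so the acknowledgment can be traced to its originating ISA13 even though the 999 never references the ISA directly. All names here are illustrative:

    import itertools

    _gs06_counter = itertools.count(1)
    outbound_index = {}  # (functional_id, sender_id, receiver_id, gs06) -> isa13

    def register_outbound_group(functional_id, sender_id, receiver_id, isa13):
        # Keep GS06 unique across interchanges (within its 9-character limit),
        # not just within one interchange, so the lookup below is unambiguous.
        gs06 = str(next(_gs06_counter))[:9]
        outbound_index[(functional_id, sender_id, receiver_id, gs06)] = isa13
        return gs06

    def resolve_999(functional_id, sender_id, receiver_id, ak102):
        # AK102 of the inbound 999 echoes the GS06 of the acknowledged group;
        # the sender/receiver IDs come from the envelope of the 999 itself.
        return outbound_index.get((functional_id, sender_id, receiver_id, ak102))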

Redis optimal hash set entry size

I have some questions regarding the optimal entry size setting for Redis hash sets.
1. In this memory-optimization example they use 100 hash entries per key but set hash-max-zipmap-entries 256. Why not hash-max-zipmap-entries 100 or 128?
2. On the redis website (above link) they used a max hash entry size of 100, but in the Instagram post they mention 1000 entries. So does this mean the optimal setting is a function of the product of hash-max-zipmap-entries & hash-max-zipmap-value? (i.e. in this case Instagram has smaller hash values than the memory-optimization example?)
Your comments/clarifications are much appreciated.
The key is, from here:
manipulating the compact versions of these [ziplist] structures can become slow as they grow longer
and
[as ziplists grow longer] fetching/updating individual fields of a HASH, Redis will have to decode many individual entries, and CPU caches won’t be as effective
So to your questions
1. That page just shows an example, and I doubt the author gave much thought to the exact values. In real life, IF you wanted to take advantage of ziplists, and you knew your number of entries per hash was <100, then setting it at 100, 128 or 256 would make no difference. hash-max-zipmap-entries is only the LIMIT over which you're telling Redis to change the encoding from ziplist to hash.
2. There may be some truth in your "product of hash-max-zipmap-entries & hash-max-zipmap-value" idea, but I'm speculating. More importantly, first you have to define "optimal" based on what you want to do. If you do lots of HSET/HGETs in a large ziplist, it will be slower than if you used a hash. But if you never get/update single fields and only ever do HMSET/HGETALL on a key, large ziplists won't slow you down. The Instagram 1000 was THEIR optimal number based on THEIR specific data, use cases, and Redis function call frequencies.
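A quick way to watch that threshold in action with redis-py — note that in current Redis releases the setting is spelled hash-max-ziplist-entries (and Redis 7 uses listpacks internally); the zipmap name is from older versions:

    import redis

    r = redis.Redis()
    r.config_set("hash-max-ziplist-entries", 128)  # successor of hash-max-zipmap-entries

    r.delete("h")
    for i in range(100):
        r.hset("h", f"f{i}", "x")
    print(r.object("encoding", "h"))  # compact encoding (ziplist/listpack)

    for i in range(100, 200):
        r.hset("h", f"f{i}", "x")
    print(r.object("encoding", "h"))  # past the limit: converted to a real hashtable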
You encouraged me to read both links, and it seems that you are asking for a "default value for hash table size".
I don't think it's possible to say that one number is universal for all possibilities. The described mechanism is similar to standard hash mapping. Look at http://en.wikipedia.org/wiki/Hash_table
If the hash table is small, many different hash values point into the same array slot and an equality check is used to find the item.
On the other hand, a large hash table allocates a lot of memory along with many empty fields, but it scales well because lookups are O(1) and no equality scan is needed.
In general the size of the table IMHO depends on the overall count of all elements you expect to put into the table, and it also depends on the diversity of the keys. I mean, if every hash starts with "0001", not even size=100000 would help you.
