RFC 3501 states that
If unique identifiers from an earlier session fail to persist in this
session, the unique identifier validity value MUST be greater than the
one used in the earlier session.
I know that "MUST" is not really negotiable, but what should a client do when it receives a UIDVALIDITY that is smaller than the one it got when it connected the last time? Should it assume that UIDs have persisted, or should it discard downloaded messages anyway?
It cannot assume that UIDs have persisted from the last UIDVALIDITY value it saw.
Suppose it sees validity 100 and UID 1000, and caches that. If it later sees validity 101 and UID 1, it cannot use the cached information about the old UID 1000 any more, but it can cache UID 1. If it then sees validity 100 again, it is permitted to use the old cached information about UID 1000, but not the information about UID 1.
Rather far-fetched. In practice I don't expect anyone to bother doing this. No servers make it worthwhile. But you asked ;)
If the UIDVALIDITY changes in any way, any cached information about the folder should be purged.
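For illustration, here is a minimal Python sketch of that purge rule using imaplib; the cache shape and function name are assumptions made up for the example:

```python
import imaplib

def open_mailbox(imap: imaplib.IMAP4, mailbox: str, cache: dict) -> dict:
    """Select `mailbox` and drop its cached data if UIDVALIDITY changed.

    `cache` is a hypothetical client-side store shaped like
    {mailbox: {"uidvalidity": int, "messages": {uid: ...}}}.
    """
    imap.select(mailbox, readonly=True)
    # The UIDVALIDITY value arrives as an untagged response to SELECT.
    typ, data = imap.response("UIDVALIDITY")
    if not data or data[0] is None:
        raise RuntimeError("server sent no UIDVALIDITY")
    validity = int(data[0])

    entry = cache.get(mailbox)
    if entry is None or entry["uidvalidity"] != validity:
        # Any change in UIDVALIDITY invalidates every cached UID for this
        # mailbox, so start over with an empty cache entry.
        entry = {"uidvalidity": validity, "messages": {}}
        cache[mailbox] = entry
    return entry
```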
I understand how a distributed ledger ensures integrity using a chained linked-list data model whereby each block is chained to its predecessor (and, transitively, to all blocks before it).
I also understand how, in a PoW/PoS/PoET/(insert any trustless consensus mechanism here) context, the verification process makes it difficult for a malicious individual (or group) to tamper with a block, because they would not have the necessary resources to broadcast an instance of the ledger to all network members so that they update their version to a wrong one.
My question is: if, let's say, someone does manage to change a block, does an integrity-checking process ever happen? Is it an actual part of the verification mechanism, and if so, how far back in history does it go?
Is it ever necessary to verify the integrity of, e.g., block number 500 out of a total of 10000, and if so, how do I do that? Do I need to start from block 10000 and verify all blocks from there back to block 500?
My question is: if, let's say, someone does manage to change a block, does an integrity-checking process ever happen?
Change the block where? If you mean change my copy of the block in my computer, how would they do that? By breaking into my computer? And how would that affect anyone else?
Is it an actual part of the verification mechanism, and if so, how far back in history does it go? Is it ever necessary to verify the integrity of, e.g., block number 500 out of a total of 10000, and if so, how do I do that? Do I need to start from block 10000 and verify all blocks from there back to block 500?
The usual rule for most blockchains is that every full node checks every single block it ever receives to ensure that it follows every single system rule for validity.
While you could re-check every block you already checked to ensure that your particular copy hasn't been tampered with, this generally serves no purpose. Someone who could tamper with your local block storage could also tamper with your local checking code. So this kind of check doesn't usually provide any additional security.
To begin with, tampering with a block is not made almost impossible by a shortage of resources for broadcasting the wrong ledger to all nodes. Broadcasting is not necessarily resource intensive; it is a chain reaction which you only have to trigger. The challenge with tampering with a block in a blockchain arises from the difficulty of recomputing valid hashes (meeting the block difficulty level) for all of the successive blocks (the blocks that come after the one being tampered with). Altering a block alters its hash, which in turn changes the previous-hash attribute of the next block, invalidating its previously correct hash, and so on up to the latest block.

Say the latest block index is 1000 and you tamper with the 990th block. You would then have to re-mine (recalculate a valid hash by repeatedly changing the nonce value) blocks 990 through 1000. That in itself is very difficult to do. But say you somehow managed to do it: by the time you broadcast your updated blockchain, other blocks (say with indexes 1001 and 1002) would have been mined by other miners, so yours would not be the longest valid blockchain and would therefore be rejected.
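To make the cascade concrete, here is a toy Python sketch (hash links only, no proof-of-work; the block layout is invented for the example):

```python
import hashlib
import json

def block_hash(block: dict) -> str:
    """Hash of a block's full contents, including its prev_hash field."""
    return hashlib.sha256(json.dumps(block, sort_keys=True).encode()).hexdigest()

# Build a toy chain of 6 blocks, each storing the hash of its predecessor.
chain = [{"index": 0, "data": "genesis", "prev_hash": "0" * 64}]
for i in range(1, 6):
    chain.append({"index": i, "data": f"tx-{i}", "prev_hash": block_hash(chain[-1])})

def first_broken_link(chain):
    """Index of the first block whose prev_hash no longer matches the
    recomputed hash of its predecessor, or None if the chain is intact."""
    for i in range(1, len(chain)):
        if chain[i]["prev_hash"] != block_hash(chain[i - 1]):
            return i
    return None

print(first_broken_link(chain))      # None: the chain is consistent
chain[2]["data"] = "tx-2 (forged)"   # tamper with block 2
print(first_broken_link(chain))      # 3: block 3 no longer points at block 2
# Repairing block 3's prev_hash changes block 3's hash, which then breaks
# block 4's link, and so on to the tip; with real proof-of-work, every one
# of those repairs would also have to be re-mined.
```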
According to this article, when a new block is broadcast to the network, the integrity of its transactions is validated and verified against the history of transactions. Trouble with the integrity check arises only if a malicious, longer blockchain is broadcast to the network. In that case, the protocol forces the nodes to accept this longest chain. Note that this longest chain maintains its own integrity and is verified on its own terms, but it is verified against its own version of the truth. Mind you, this is only possible if the attacker has at least 51% of the network's hashing power, and that kind of power is assumed to be practically unattainable. https://medium.com/coinmonks/what-is-a-51-attack-or-double-spend-attack-aa108db63474
We're planning on saving the Twilio call IDs on our side for specific reasons. For efficiency, it would be better for us to store them as uuid instead of varchar.
From our tests, the call ids we are getting are 34 characters long and start with 'CA'. We want to know if this is always the case.
Is it safe for us to store the 32-character ID (without the first two characters) in our database as uuid? Will it still be unique?
So, I contacted Twilio support since we really wanted to avoid storing the IDs as varchar. And as mentioned, there's nothing in the documentation about the ID format.
Turns out they say it's safe to store the last 32 characters. So we'd be able to save it as uuid and it'd be more efficient when saving and fetching records.
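As a sketch of that approach in Python, assuming, as above, that every call SID is 'CA' followed by 32 hex characters (and that the hex part is consistently lower case, which is worth confirming if the stored value must round-trip byte for byte):

```python
import uuid

def sid_to_uuid(sid: str) -> uuid.UUID:
    """Keep only the 32 hex characters after the two-letter prefix."""
    prefix, body = sid[:2], sid[2:]
    if prefix != "CA" or len(body) != 32:
        raise ValueError(f"unexpected SID format: {sid!r}")
    return uuid.UUID(hex=body)

def uuid_to_sid(value: uuid.UUID) -> str:
    """Rebuild the 34-character SID for use against the Twilio API."""
    return "CA" + value.hex

sid = "CA" + "0123456789abcdef" * 2          # placeholder, not a real SID
assert uuid_to_sid(sid_to_uuid(sid)) == sid  # the round trip is lossless
```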
I haven't found any info in the docs, so my suggestion is: DON'T DO IT.
If it's not specified in the docs that the SID is unique without the 'CA' prefix, then they are free not to keep it unique.
Maybe it works, maybe it keeps working, but one day it may not work and you are going to spend lots of resources hunting down the bug. Then you will regret not saving the full 34-character string just to save a few bytes.
So IMAP has a feature where, once I've looked at a mailbox, I can efficiently look for any new messages by asking it for any new UIDs which I haven't seen yet.
But how do I efficiently find expunged messages? I haven't found any commands for doing this yet; my only option is to retrieve the full UID list of the mailbox and look for missing ones. This is rather suboptimal.
I have mailboxes with 25000+ messages. Scanning one of these mailboxes takes megabytes of traffic just to do the UID SEARCH command, so I'd like to avoid this. Is there anything hidden in the depths of the IMAP protocol that I'm missing?
Okay, for offline use this can work:
Since big mailboxes typically grow big by having many messages added and few deleted, you can optimise for large parts of the mailbox being unchanged. If your client-side store contains 10000 messages, then you can send `x UID SEARCH 2000,4000,6000,8000,10000`, which will return five UIDs in one SEARCH response. Depending on which ones have changed, you know whether there have been any expunges in each fifth of the mailbox, so if the mailbox hasn't changed you've verified that very cheaply. If a particular fifth has changed, you can retrieve all 2000 UIDs in that fifth.
QRESYNC/CONDSTORE is much nicer though, and also lets you resync flag state.
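Here is a rough Python/imaplib sketch of the probe described above, assuming the mailbox is already selected and that cached_uids is the client's list of known UIDs in mailbox order; the function name and the simple all-or-nothing check are illustrative only:

```python
import imaplib

def sample_matches(imap: imaplib.IMAP4, cached_uids: list[int]) -> bool:
    """One-round-trip probe of the mailbox at a handful of positions.

    'UID SEARCH 2000,4000,...' returns the UIDs currently sitting at those
    sequence positions. If they all still equal the cached UIDs at the same
    positions, nothing before the last probed position has been expunged.
    On a mismatch, fall back to fetching the UID list for the affected part
    of the mailbox.
    """
    step = max(1, len(cached_uids) // 5)
    positions = list(range(step, len(cached_uids) + 1, step))  # 2000, 4000, ...
    typ, data = imap.uid("SEARCH", ",".join(map(str, positions)))
    current = sorted(int(u) for u in data[0].split()) if data and data[0] else []
    expected = [cached_uids[p - 1] for p in positions]
    return current == expected
```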
The only effective answer I know to this is to change the problem.
Instead of learning which of the 25000 messages have been deleted, the client can learn which of the messages in its cache have been deleted and which ones still exist, and that can be done fairly efficiently. One approach is to keep a per-message flag in the client, "this message has been observed to exist in this IMAP connection" and when you access a cached message for which the flag isn't set, you send "x UID SEARCH UID y", which will return the message's UID if the message exists and an empty result if not. QRESYNC's SELECT arguments provide a natural improvement to this technique.
The problems, of course, are that with a non-QRESYNC server the client will cache messages for longer than necessary, and that depending on how you implement it the client might flicker or have unpleasant delays.
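And a sketch of the per-message probe from the approach above, again with imaplib and a selected mailbox; a real client would pair it with the "observed to exist in this connection" flag so each cached message is probed at most once per session:

```python
import imaplib

def still_exists(imap: imaplib.IMAP4, uid: int) -> bool:
    """Ask the server whether one cached message still exists.

    'UID SEARCH UID <uid>' echoes the UID back if the message is still
    present and returns an empty result if it has been expunged.
    """
    typ, data = imap.uid("SEARCH", "UID", str(uid))
    return bool(data and data[0].split())
```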
The question is merely theoretical, really a rare use case; I'm just wondering about it.
I have a client-server setup (CoreData, JSON, AFNetworking, etc.) where every account has multiple read-write users.
Therefore I timestamp every database entry with a _lastModificationDate to be able to sync / merge changes later.
So one of the users goes offline and sets the device's date to 2030 (with Set Automatically off), saves some entity (so it gets a 2030 timestamp), then goes online.
Twenty years pass and that user doesn't change a single thing. Still, every sync from this user will overwrite data in the database until we pass 2030.
How should I get over this?
You could either add a new boolean field, modifiedOffline, or, while the user is offline, bump _lastModificationDate by +1 on each modification. However, in both situations, the updates from the server will/may overwrite the local ones, even if they're newer.
Another solution would be: after the user gets online, the app would cap _lastModificationDate at the current date (which I presume you fetch from the server). This way, newer modifications will work normally.
The server database could have a sanity check. Modification dates in the future get set to the current date when first encountered. Maybe also dates pre-dating the iPhone ;-).
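A minimal sketch of that server-side sanity check in Python; the function name, the lower bound, and the assumption that the server sees each record's _lastModificationDate at sync time are all illustrative:

```python
from datetime import datetime, timezone

def sanitize_modification_date(client_date: datetime, server_now: datetime) -> datetime:
    """Clamp an incoming _lastModificationDate to a plausible range.

    Dates in the future (e.g. a device clock set to 2030) and dates that
    pre-date the platform are replaced with the server's current time,
    so one bad clock cannot win every merge until 2030.
    """
    lower_bound = datetime(2007, 1, 1, tzinfo=timezone.utc)  # "pre-dating the iPhone"
    if client_date > server_now or client_date < lower_bound:
        return server_now
    return client_date

# Example: a record stamped in 2030 arrives today and gets re-stamped to now.
now = datetime.now(timezone.utc)
print(sanitize_modification_date(datetime(2030, 1, 1, tzinfo=timezone.utc), now) == now)
```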
This is my very first question and I hope it's well explained so I can find an answer.
I work on the website project for a delivery company that has all its data in an Oracle9i server.
Most of the web users just want to know when they're going to get their package, but I'm sure there are also robots that query that info several times a day to update their systems.
I'm working on code to stop those robots (asking for a captcha after the 3rd query in 15 minutes, for example) because we have web services they can use to query all the data in bulk.
Now, my problem is that at peak hours (12:00-14:00) the database starts to answer very slowly.
Here is some data I've parsed from the web application. I don't have logs at this level for the web services, but there were also a lot of queries there.
It shows the timestamp when I request a connection from the datasource, the Integer.toHexString(connection.hashCode()), the name of the datasource, the timestamp when I close the connection and the difference between both timestamps.
Most of the time the queries finish in less than a second, but yesterday I had this strange delay of more than 2 minutes.
Is there some kind of maximum number of connections allowed on the database, so that when it surpasses that limit the database queues my query for some time before trying again?
Thanks in advance.
Is there some kind of maximum number of connections allowed on the database?
Yes.
SESSIONS is one of the basic initialization parameters and
specifies the maximum number of sessions that can be created in the
system. Because every login requires a session, this parameter
effectively determines the maximum number of concurrent users in the
system.
The default value is derived from the PROCESSES parameter (1.5 times this plus 22); therefore if you didn't change the PROCESSES parameter (default 100) the maximum number of sessions to your database will be 172.
You can determine the value by querying V$PARAMETER:
SQL> select value
2 from v$parameter
3 where name = 'sessions';
VALUE
--------------------------------
480
so that when it surpasses that limit the database queues my query for some time before trying again?
No.
When you attempt to exceed the value of the SESSIONS parameter the exception ORA-00018: maximum number of sessions exceeded will be raised.
Something may well be queuing your query but it will be within your own code and not specified by Oracle.
It sounds as though you should find out more information. If you're not at the maximum number of sessions then you need to capture the query that's taking a long time and profile it; this would, I think, be the more likely scenario. If you are at the maximum number of sessions then you need to look at your (company's) code to determine what's happening.
You haven't really explained anything about your application but it sounds as though you're opening a session (or more) per user. You might want to reconsider whether this is the correct approach.
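For instance, if the web tier is opening a dedicated session per user or per request, a small fixed-size connection pool keeps the session count well below the SESSIONS/PROCESSES limits. The sketch below uses the python-oracledb driver purely as an illustration (a pooled DataSource plays the same role in a Java stack); the credentials, DSN, and shipments table are placeholders:

```python
import oracledb  # python-oracledb; cx_Oracle's SessionPool is the older equivalent

# A small, fixed-size pool instead of one session per web user. The pool
# size stays well under the SESSIONS/PROCESSES limits, and by default
# callers that arrive while all connections are busy wait inside the
# application rather than opening more sessions.
pool = oracledb.create_pool(user="webapp", password="change-me",
                            dsn="dbhost/ORCL", min=4, max=20, increment=2)

def track_package(tracking_id: str):
    # Borrow a connection for the duration of the query; the context
    # manager returns it to the pool when the block exits.
    with pool.acquire() as conn:
        with conn.cursor() as cur:
            cur.execute(
                "select status from shipments where tracking_id = :1",  # placeholder table
                [tracking_id])
            return cur.fetchone()
```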
I've also found the real problem.
The method that requests a connection from the datasource was synchronized, and it caused lock contention while requesting connections at peak hours. I removed the synchronization and everything is working fine.