How do maxKeyCount and addPositiveDiagnosisKeys interact?

I'm learning the new Contact Tracing API that Apple is releasing for iOS (in partnership with Google).
I'm not grasping the relationship between the maxKeyCount property on CTExposureDetectionSession and the addPositiveDiagnosisKeys:completion: method.
What I understand
A CTExposureDetectionSession is the object that allows an app to ask the framework to start trying to match a list of published Diagnosis Keys against the local database of captured Rolling Proximity Identifiers.
The app would start by calling the activateWithCompletion: method on a new session, then call addPositiveDiagnosisKeys: one or more times, and eventually inform the framework that no more keys are to be added by calling finishedPositiveDiagnosisKeysWithCompletion:.
That last call also indicates the block to run upon detection completion, which will be called with a CTExposureDetectionSummary object reporting the number of Diagnosis Keys that the device has been exposed to.
What I do not understand
The maxKeyCount property doc says:
This property contains the maximum number of keys to provide to this API at once. This property’s value updates after each operation complete and before the completion handler is invoked. Use this property to throttle key downloads to avoid excessive buffering of keys in memory.
But the addPositiveDiagnosisKeys: method says:
Asynchronously adds the specified keys to the session to allow them to be checked for exposure. Each call to this method must include more keys than specified by the current value of <maxKeyCount>.
maxKeyCount seems to be a maximum, but the addPositiveDiagnosisKeys: requires me to call it with more keys than the maximum.
Am I expected to call the method with a superlist of the previously sent list? That doesn't seem to fit well with the "avoid excessive buffering of keys in memory" part - if I have to use an ever-growing list of keys.
And what does the "This property's value updates after each operation complete" part mean?

The documentation of addPositiveDiagnosisKeys: is missing a not.
The Android Contact Tracing API documentation has an analogous interface:
/**
 * Provides a list of diagnosis keys for contact checking. The keys are to be
 * provided by a centralized service (e.g. synced from the server).
 *
 * When invoked after the requestProvideDiagnosisKeys callback, this triggers a
 * recalculation of contact status which can be obtained via hasContact()
 * after the calculation has finished.
 *
 * Should be called with a maximum of N keys at a time.
 */
Task<Status> provideDiagnosisKeys(List<DailyTracingKey> keys);

/**
 * The maximum number of keys to pass into provideDiagnosisKeys at any given
 * time.
 */
int getMaxDiagnosisKeys();
As Paulw11 suggested in a comment, the maxKeyCount property seems to be a value intended for reading that states how many Diagnosis Keys are to be sent to the API in a single call for the matching to be performed.
Callers should re-check the value before each call, since it may get updated after each call.
The fixed documentation of addPositiveDiagnosisKeys: should, then, read:
Each call to this method must NOT include more keys than specified by the current value of <maxKeyCount>.
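With that correction, the caller's job is to chunk its key list so that no single call exceeds maxKeyCount, re-reading the property between calls. A minimal Swift sketch using the draft API names from the question (the shipped framework may differ):

func submit(_ keys: [CTDailyTracingKey], to session: CTExposureDetectionSession) {
    var remaining = ArraySlice(keys)

    func addNextBatch() {
        guard !remaining.isEmpty else {
            // No keys left: ask the framework to run the match and report a summary.
            session.finishedPositiveDiagnosisKeys { summary, error in
                // summary reports the number of Diagnosis Keys matched
            }
            return
        }
        // Re-read maxKeyCount before every call; it can update after each operation.
        let batch = Array(remaining.prefix(session.maxKeyCount))
        remaining = remaining.dropFirst(batch.count)
        session.addPositiveDiagnosisKeys(batch) { error in
            if error == nil { addNextBatch() }
        }
    }

    addNextBatch()
}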


Processing Total Ordering of Events By Key using Apache Beam

Problem Context
I am trying to generate a total (linear) order of event items per key from a real-time stream, where the ordering is by event time (derived from the event payload).
Approach
I attempted to implement this using streaming as follows:
1) Set up non-overlapping sequential windows, e.g. of 5 minutes' duration
2) Establish an allowed lateness - it is fine to discard late events
3) Set the accumulation mode to retain all fired panes
4) Use the AfterWatermark trigger
5) When handling a triggered pane, only consider the pane if it is the final one
6) Use GroupByKey to ensure all events in this window for this key will be processed as a unit on a single resource
While this approach ensures linear order for each key within a given window, it does not make that guarantee across windows: a later window of events for the key could be processed at the same time as an earlier window, which could easily happen if the earlier window failed and had to be retried.
I'm considering adapting this approach so that the realtime stream is first processed to partition the events by key and write them to files named by their window range.
Due to the parallel nature of beam processing, these files will also be generated out of order.
A single coordinator process could then submit these files sequentially to a batch pipeline, only submitting the next one once it has received the previous file and its downstream processing has completed successfully.
The problem is that Apache Beam will only fire a pane if there was at least one element in that time window. Thus if there are gaps in events then there could be gaps in the files that are generated, i.e. missing files. The problem with missing files is that the coordinating batch processor cannot distinguish between the time window having passed with no data and a failure, in which case it cannot proceed until the file finally arrives.
One way to force the event windows to trigger might be to somehow add dummy events to the stream for each partition and time window. However, this is tricky to do: if there are large gaps in the time sequence, the dummy events may arrive surrounded by much later events and be discarded as late.
Are there other approaches to ensuring there is a trigger for every possible event window, even if that results in outputting empty files?
Is generating a total ordering by key from a realtime stream a tractable problem with Apache Beam? Is there another approach I should be considering?
Depending on your definition of tractable, it is certainly possible to totally order a stream per key by event timestamp in Apache Beam.
Here are the considerations behind the design:
Apache Beam does not guarantee in-order transport, so there is no use for ordering within a pipeline. So I will assume you are doing this so you can write to an external system that can only handle things if they come in order.
If an event has timestamp t, you can never be certain no earlier event will arrive unless you wait until t is droppable.
So here's how we'll do it:
We'll write a ParDo that uses state and timers (blog post still under review) in the global window. This makes it a per-key workflow.
We'll buffer elements in state when they arrive. So your allowed lateness affects how efficient a data structure you need. What you need is a heap, to peek and pop the minimum timestamp and element; there's no built-in heap state, so I'll just write it as a ValueState.
We'll set an event-time timer to receive a callback when an element's timestamp can no longer be contradicted.
I'm going to assume a custom EventHeap data structure for brevity. In practice, you'd want to break this up into multiple state cells to minimize the data transferred. A heap might be a reasonable addition to the primitive types of state.
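For concreteness, here is a rough sketch of what that hypothetical EventHeap could look like; the class, its Event element type, and createForKey are illustrative assumptions, not part of Beam:

import java.util.Comparator;
import java.util.PriorityQueue;
import org.joda.time.Instant;

class EventHeap {
  // Min-heap ordered by event timestamp.
  private final PriorityQueue<Event> events =
      new PriorityQueue<>(Comparator.comparing(Event::getTimestamp));

  static <K> EventHeap createForKey(K key) {
    return new EventHeap();
  }

  void add(Event event) { events.add(event); }
  boolean isEmpty() { return events.isEmpty(); }
  Instant nextTimestamp() { return events.peek().getTimestamp(); }
  Event pop() { return events.poll(); }
}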
I will also assume that all the coders we need are already registered and focus on the state and timers logic.
new DoFn<KV<K, Event>, Void>() {

  @StateId("heap")
  private final StateSpec<ValueState<EventHeap>> heapSpec = StateSpecs.value();

  @TimerId("next")
  private final TimerSpec nextTimerSpec = TimerSpecs.timer(TimeDomain.EVENT_TIME);

  @ProcessElement
  public void process(
      ProcessContext ctx,
      @StateId("heap") ValueState<EventHeap> heapState,
      @TimerId("next") Timer nextTimer) {
    EventHeap heap = firstNonNull(
        heapState.read(),
        EventHeap.createForKey(ctx.element().getKey()));
    heap.add(ctx.element().getValue());
    // Persist the updated heap back to state.
    heapState.write(heap);
    // When the watermark reaches this time, no more elements
    // can show up that have earlier timestamps
    nextTimer.set(heap.nextTimestamp().plus(allowedLateness));
  }

  @OnTimer("next")
  public void onNextTimestamp(
      OnTimerContext ctx,
      @StateId("heap") ValueState<EventHeap> heapState,
      @TimerId("next") Timer nextTimer) {
    EventHeap heap = heapState.read();
    // If the timer at time t was delivered, the watermark must
    // be strictly greater than t
    while (!heap.isEmpty() && !heap.nextTimestamp().isAfter(ctx.timestamp())) {
      writeToExternalSystem(heap.pop());
    }
    heapState.write(heap);
    // Re-arm the timer only if elements remain buffered.
    if (!heap.isEmpty()) {
      nextTimer.set(heap.nextTimestamp().plus(allowedLateness));
    }
  }
}
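Wiring it up is then an ordinary stateful ParDo; a sketch, where OrderPerKeyFn is a hypothetical name for the DoFn above:

events.apply("OrderPerKey", ParDo.of(new OrderPerKeyFn()));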
This should hopefully get you started on the way towards whatever your underlying use case is.

Making multiple entries with the same key name in Hyperledger

I was trying to write a smart contract that would use the same key name (the name of the person whose details I am storing) multiple times, and I want all of the entries made for that key name to be returned when querying for the name.
Is it possible to do so on Hyperledger? If yes, how would you write the query function? If not, could you recommend an alternative method to achieve the same result?
I am new to Hyperledger and have no idea how to proceed, considering I didn't see any examples of chaincode similar to this.
What you need to do is encode the value into JSON format and store it marshaled for the given key: for example, you can define a struct with a slice, append the new value to the slice each time, marshal the struct to a byte array, and then save it into the ledger.
On each update, you read the byte array from the ledger, unmarshal it back into the struct, update it with the required information, and save it back under the same key.
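For example, a minimal Go sketch of that pattern using Fabric's shim package; the PersonRecords struct and addEntry function are illustrative names, and error handling is trimmed:

package main

import (
    "encoding/json"

    "github.com/hyperledger/fabric/core/chaincode/shim"
)

// PersonRecords accumulates every entry stored under one person's name.
type PersonRecords struct {
    Entries []string `json:"entries"`
}

func addEntry(stub shim.ChaincodeStubInterface, name, detail string) error {
    var records PersonRecords
    // Read the existing value, if any, and unmarshal it back into the struct.
    if data, err := stub.GetState(name); err == nil && data != nil {
        json.Unmarshal(data, &records)
    }
    // Append the new entry and save the whole slice back under the same key.
    records.Entries = append(records.Entries, detail)
    data, err := json.Marshal(records)
    if err != nil {
        return err
    }
    return stub.PutState(name, data)
}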
To retrieve the history of changes you can use the GetHistoryForKey method from ChaincodeStubInterface:
// ChaincodeStubInterface is used by deployable chaincode apps to access and
// modify their ledgers.
type ChaincodeStubInterface interface {
    // Other methods omitted.

    // GetHistoryForKey returns a history of key values across time.
    // For each historic key update, the historic value and associated
    // transaction id and timestamp are returned. The timestamp is the
    // timestamp provided by the client in the proposal header.
    // GetHistoryForKey requires peer configuration
    // core.ledger.history.enableHistoryDatabase to be true.
    // The query is NOT re-executed during validation phase, phantom reads are
    // not detected. That is, other committed transactions may have updated
    // the key concurrently, impacting the result set, and this would not be
    // detected at validation/commit time. Applications susceptible to this
    // should therefore not use GetHistoryForKey as part of transactions that
    // update ledger, and should limit use to read-only chaincode operations.
    GetHistoryForKey(key string) (HistoryQueryIteratorInterface, error)
}
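Iterating the returned history might look like this; a fragment inside a chaincode function, with mod's fields coming from Fabric's queryresult package:

iter, err := stub.GetHistoryForKey(name)
if err != nil {
    return err
}
defer iter.Close()
for iter.HasNext() {
    mod, err := iter.Next()   // one historic update: value, tx id, timestamp
    if err != nil {
        return err
    }
    fmt.Printf("tx %s: %s\n", mod.TxId, string(mod.Value))
}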

Swift: How do I store an array of object by reference from completion handler (closure)?

There's an API call that returns a JSON-format result. The requirement, in short, is that I need to keep calling this API and run Breadth First Search on the results it returns.
Imagine it's a map with many nodes and connections. Every time I make the API call for a node, it returns a list of its connected nodes. All I need is an array which saves all the nodes that have been visited, to avoid repeated visits.
But this is Swift and I am new to it. I was passing an Array as inout inside the completion handler, but there's an error: escaping closures can only capture inout parameters explicitly by value, which means I cannot do it like this.
Now you may ask why the array has to be stored by reference: because the API call is async, I have to wait until it comes back to keep progressing the Breadth First Search, which means I have to pass this array by reference in order to do the recursion.
What other solutions I may have?
Swift Arrays are value types (not reference types), so you will need to store your array in an object (a class instance). You can then pass the object to your handler and set the array contents inside the object, which is captured by reference in the closures.
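A minimal sketch of that pattern; fetchNeighbors stands in for whatever async API call you are making, and the traversal details are beside the point - what matters is the shared class instance:

final class VisitedStore {
    // The class gives us reference semantics; the array inside remains a value type.
    var visited: [String] = []
}

func visit(_ node: String, store: VisitedStore) {
    fetchNeighbors(of: node) { neighbors in      // escaping completion handler
        for next in neighbors where !store.visited.contains(next) {
            store.visited.append(next)           // mutates the shared instance
            visit(next, store: store)            // recurse once the async call returns
        }
    }
}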

How do I filter Purchase Order query in QBXML to only return records that are not fully received?

When doing a PurchaseOrderQuery in QBXML I am trying to get Quickbooks to only return purchase orders that are not yet processed (i.e. "IsFullyReceived" == false). The response object contains the IsFullyReceived flag, but the query object doesn't seem to have a filter for it??
This means I have to get every single Purchase Order whether or not it's received, then do the filtering logic in my application - which slows down Web Connector transactions.
Any ideas?
Thanks!
You can't.
The response object contains the IsFullyReceived flag, but the query object doesn't seem to have a filter for it??
Correct, there is no filter for it.
You can see this in the docs:
https://developer-static.intuit.com/qbSDK-current/Common/newOSR/index.html
This means I have to get every single Purchase Order whether or not it's received, then do the filtering logic in my application - which slows down Web Connector transactions.
Yep, probably.
Any ideas?
Try querying only for Purchase Orders changed or modified (ModifiedDateRangeFilter) since the last time you synced - see the sketch after this list.
Or, instead of pulling every single PO, keep track of a list of POs that you think may not have been received yet, and then only query for those specific POs based on RefNumber.
Or, watch the ItemReceipt and BillPayment objects, and use that to implement logic about which POs may have been recently filled, since BillPayment and ItemReceipt objects should get created as the PO is fulfilled/received.
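A hedged QBXML sketch of that first suggestion; the date is a placeholder for your last sync time, and the version attribute depends on your SDK setup:

<?xml version="1.0"?>
<?qbxml version="13.0"?>
<QBXML>
  <QBXMLMsgsRq onError="stopOnError">
    <PurchaseOrderQueryRq>
      <ModifiedDateRangeFilter>
        <FromModifiedDate>2020-01-01T00:00:00</FromModifiedDate>
      </ModifiedDateRangeFilter>
      <IncludeRetElement>TxnID</IncludeRetElement>
      <IncludeRetElement>RefNumber</IncludeRetElement>
      <IncludeRetElement>IsFullyReceived</IncludeRetElement>
    </PurchaseOrderQueryRq>
  </QBXMLMsgsRq>
</QBXML>

Your app still does the IsFullyReceived filtering itself, but only over the POs that actually changed since the last sync.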

Cancelling NSJSONSerialization - Search as you type, requests overlapping

Similar to the iPhone Facebook app's search function, I am implementing search-as-you-type functionality in my application, but I have a problem when decoding the data into JSON format.
Basically, because some searches take longer than others, they return at different intervals, and this causes some small visual issues when the data is presented on screen.
I have set an NSLog after each decode using NSJSONSerialization, for the keyword 'industry':
2013-04-09 23:38:18.941 Project Name [42836:1d03] http://fooWebAddress/json/?method=search&limit=10&q=indus
2013-04-09 23:38:19.776 Project Name [42836:3e07] http://fooWebAddress/json/?method=search&limit=10&q=indu
2013-04-09 23:38:20.352 Project Name [42836:8803] http://fooWebAddress/json/?method=search&limit=10&q=indust
2013-04-09 23:38:21.814 Project Name [42836:4e03] http://fooWebAddress/json/?method=search&limit=10&q=industr
2013-04-09 23:38:23.434 Project Name [42836:8803] http://fooWebAddress/json/?method=search&limit=10&q=ind
2013-04-09 23:38:24.070 Project Name [42836:7503] http://fooWebAddress/json/?method=search&limit=10&q=industry
As you can see it is all out of order.
Does anyone have any way of stopping NSJSONSerialization for the previous connection?
Or possibly any other way to go about this problem?
Steps up to NSJSONSerialization...
NSURLRequest (initWithURL:)
NSOperationQueue
NSURLConnection (asynchronous)
NSJSONSerialization
Thanks in advance.
When the user starts typing more text, you could cancel your previous connections and ignore any further delegate callbacks you receive from them. Then make the new request for the current text.
You can do this by maintaining some sort of lastRequest or lastOperation reference. When the user starts typing, call [self.lastRequestOrOperation cancel] and ignore any further notifications from that request with a check like if (request != self.lastRequest) { return; } in whatever callbacks you have.
However this has the problem that if the user keeps typing for a while you are constantly cancelling requests and they may not see any results until they have stopped typing.
A better solution would be to add sequencing so that each request is associated with an increasing sequence ID. You then only parse the result and update the UI when the sequence of the response is higher than the last one you received. If you receive any out-of-order responses from earlier requests, you just ignore them.
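A rough Objective-C sketch of a simplified version of that idea, where only the most recent request's response is honored; latestRequestID, searchURL, and self.queue are illustrative names, not existing API:

// In a class extension: @property (nonatomic) NSUInteger latestRequestID;

NSUInteger requestID = ++self.latestRequestID;   // tag this request
NSURLRequest *request = [NSURLRequest requestWithURL:searchURL];
[NSURLConnection sendAsynchronousRequest:request
                                   queue:self.queue
                       completionHandler:^(NSURLResponse *response, NSData *data, NSError *error) {
    if (requestID != self.latestRequestID) {
        return;   // a newer search superseded this one; drop the stale response
    }
    id json = [NSJSONSerialization JSONObjectWithData:data options:0 error:NULL];
    dispatch_async(dispatch_get_main_queue(), ^{
        // update the UI with json here
    });
}];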
This is a much more complex issue than just being able to cancel the NSJSONSerialization. My suggestion is to use an NSFetchedResultsController to populate the table view that shows your search results. Use the search term as one of the predicate variables in the NSFetchRequest attached to the NSFetchedResultsController. Then, when you parse the results using NSJSONSerialization, store the results with the search term associated with that request. As soon as the search term changes (which you can detect when the user types more characters), re-create the NSFetchedResultsController and reload your table view. In addition, you can also try to cancel the call that parses the previous results if you launched it using performSelector:withObject:afterDelay:. Beware that this cannot always be relied upon, as the call may already have been initiated by the time you try to cancel it.
Kinda basic, but you could always maintain an NSDictionary of subclassed NSURLRequests (subclassed to provide a tag):
Start request - add the request to the dictionary with tag = array.count - 1, with the key matching the tag
Connection returns - if the request is the most recent request, parse the JSON
Parse JSON - if the request is the most recent request, show the results; if not, only display them if no more recent results are displayed
Request handling - remove the key from the dictionary
Most recent request - the dictionary contains no request with a higher key value
Currently you are calling the web service for each character you type. There is no need to call the web service for every letter typed; if the user types continuously it only increases the load. Instead, call the web service only when the user pauses for a particular interval of time, and then pass that string to the web-service call or whatever method you are calling.
[NSObject cancelPreviousPerformRequestsWithTarget:self]; // cancels the pending requests queued while the user was typing without stopping
[self performSelector:@selector(sendSearchRequest:) withObject:searchText afterDelay:0.1f]; // schedules the web-service call with the string, firing only once the user has paused
