The FHIR search spec states that the _include parameter can be added to the query URL to request that specified referenced resources be returned in full, avoiding further network requests to retrieve them.
e.g.
diagnosticreport/search?_include=DiagnosticReport.subject&_include=Patient.provider
This construct requires that you know, in advance of the query being made, which resources are going to be referenced in the result set. I suspect that for resources such as "Observation", where there will potentially be many profiles, each potentially having different extensions, this will not be the case.
Is it feasible to have a syntax whereby all of the referenced resources are "included"?
This page: https://www.hl7.org/implement/standards/FHIR/search.html#return
describes the following:
2.2.4.1 Include Paths
Include paths may include wild cards, such as MedicationDispense.results.*, or even _include=*, though both clients and servers need to take care not to request or return too many resources when doing this. Most notably, re-applying inclusion paths over newly included resources might lead to cycles or the retrieval of the full patient's file: resources are organized into an interlinked network and broad _include paths may eventually traverse all possible paths on the server. For servers, these recursive and wildcard _includes are demanding and may slow the search response time significantly. Servers are expected to limit the number of iterations done and are not obliged to honor requests to include additional resources in the search results.
Umm, that might be possible. Though there's the risk that you'd get a swag of resources you had no idea why you were getting. And the server might be more inclined to reject that kind of request. It's certainly a lot slower for a server - it has to evaluate a lot more content to decide what references to include or not.
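For illustration, here is a minimal client sketch of such a wildcard request (the base URL and report id are placeholders, and as the spec text above warns, a server may refuse or limit the wildcard):

```typescript
// Hypothetical FHIR endpoint; replace with a real server base URL.
const base = 'https://fhir.example.org';

async function fetchReportWithIncludes(reportId: string) {
  // _include=* asks the server to pull in every referenced resource it is willing to return.
  const url = `${base}/DiagnosticReport?_id=${reportId}&_include=*`;
  const res = await fetch(url, { headers: { Accept: 'application/fhir+json' } });
  const bundle = await res.json();

  // Matched resources carry search.mode === 'match'; pulled-in ones are 'include'.
  for (const entry of bundle.entry ?? []) {
    console.log(entry.search?.mode, entry.resource?.resourceType);
  }
  return bundle;
}
```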
I've been reading the Workbox strategies documentation, and I can't think of a situation where I would use the "cache-first" strategy in Workbox.
There is the "Stale-While-Revalidate" strategy, which serves from the cache first and, in the background, updates the cached file over the network. If the target file changes, this is useful because on the next visit the app uses the latest file that was cached last time. And if nothing has changed, there seems to be no disadvantage.
What is the main purpose of the "cache-first" strategy in Workbox?
Thanks in advance.
(This answer isn't specific to Workbox, though Workbox makes it easier to use these strategies, instead of "rolling them by hand.")
I'd recommend using a cache-first strategy when you're sure that the content at a given URL won't change. If you're confident of that, then the extra network request made in the stale-while-revalidate strategy is just a waste—why bother with that overhead?
The scenario in which you should have the highest confidence that the content at a URL won't change is when the URL contains some explicit versioning information (e.g. https://example.com/libraries/v1.0.0/index.js) or a hash of the underlying contents (e.g. https://example.com/libraries/index.abcd1234.js).
It can sometimes make sense to use a cache-first strategy when a resource might update, but your users are not likely to "care" about the update, and the cost of retrieving the update is high. For example, you could argue that using a cache-first strategy for the images used in the body of an article is a reasonable tradeoff, even if the image URLs aren't explicitly versioned. Seeing an out-of-date image might not be the worst thing, and you would not force your users to download a potentially large image during the revalidation step.
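A minimal service-worker sketch of that split (assuming the modular Workbox v6 packages; the URL pattern and cache names are made up for illustration):

```typescript
import { registerRoute } from 'workbox-routing';
import { CacheFirst, StaleWhileRevalidate } from 'workbox-strategies';

// Hashed/versioned assets can be treated as immutable: cache-first, no revalidation.
registerRoute(
  ({ url }) => /\.[0-9a-f]{8}\.js$/.test(url.pathname),
  new CacheFirst({ cacheName: 'hashed-assets' })
);

// Unversioned pages may change: serve from cache, refresh in the background.
registerRoute(
  ({ request }) => request.destination === 'document',
  new StaleWhileRevalidate({ cacheName: 'pages' })
);
```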
In OneDrive Business account I have shared files and folders and I'm trying to get a list of emails/users with whom the items are shared.
Both
https://graph.microsoft.com/v1.0/me/drive/sharedWithMe
and
https://graph.microsoft.com/v1.0/me/drive/root/children
produce a similar result. I get the list of files, but the Permissions property is never present. All I can see is whether an item is shared, but not with whom.
Now, I'm aware of /drive/items/{fileId}/permissions, but this would mean checking the files one by one. My app deals with a lot of files and I would really appreciate a way to get those permissions in bulk...
Is there such an option?
/sharedWithMe is actually the opposite of what you're looking for. These are not files you've shared with others but rather files others have shared with you.
As for your specific scenario, permissions is unfortunately not supported in a collection. In other words, it isn't possible to $expand=permissions on the /children collection. Each file needs to be inspected separately.
You can however reduce the number of files you need to inspect by looking at the shared property. For example, if the scope property is set to user you know this file was shared with a specific user. If the shared property is null, you know this file is only available to the current user.
You can also reduce the number of calls you're making by using JSON Batching. After constructing a list of shared files you want to check, you can use Batching to process them in blocks of 20. This should greatly reduce the amount of overhead and dramatically improve the overall performance.
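As a rough sketch of that approach (the item IDs and access token are assumed to come from an earlier /children query, and error handling is omitted):

```typescript
// Fetch permissions for many drive items via Microsoft Graph JSON batching.
async function fetchPermissionsInBatches(itemIds: string[], accessToken: string) {
  const responses: unknown[] = [];
  // Graph accepts at most 20 requests per $batch call.
  for (let i = 0; i < itemIds.length; i += 20) {
    const body = {
      requests: itemIds.slice(i, i + 20).map((id, n) => ({
        id: String(n + 1),
        method: 'GET',
        url: `/me/drive/items/${id}/permissions`,
      })),
    };
    const res = await fetch('https://graph.microsoft.com/v1.0/$batch', {
      method: 'POST',
      headers: {
        Authorization: `Bearer ${accessToken}`,
        'Content-Type': 'application/json',
      },
      body: JSON.stringify(body),
    });
    const json = await res.json();
    responses.push(...json.responses);
  }
  return responses;
}
```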
_api/web/onedriveshareditems?$top=100&$expand=SpItemUrl might just do the trick. This is the URL that is used by the web interface of OneDrive. Hope it helps
Is it possible to get multiple paths or URLs back from either:
-[NSFileManager URLsForDirectory:inDomains:]
or:
NSSearchPathForDirectoriesInDomains
if the directory is NSDocumentDirectory? The API doesn't seem to make any guarantees about how many entries will be returned, but almost all uses seem to simply take the resulting NSArray and call firstObject. That's conceptually simpler than, for instance, iterating across the resulting options, but it makes me wonder if these examples are oversimplified. Would these examples be better off allowing for the possibility of multiple return values, or would that be over-engineering?
As you've stated, the documentation of the API implies that more than one directory may be returned (even for the documents directory). IIRC, as of iOS 9.0 the API always returns a single directory for documents, though it's always better to plan ahead and use the API as intended (simply iterate over all of the results in the order they are received).
The point of concatenation is to improve performance by having just one file to download, but that means that every time you change a bit of your own JavaScript, the whole package is recompiled and fingerprinted - including large libraries like jQuery that haven't changed and would have been cached if they were downloadable separately. Now jQuery is going to be re-downloaded every time as part of your unified application.js.
What am I missing here? Wouldn't the best approach be to create two manifests - one for your own files (which are small and change frequently), and one for libraries (which are large and change infrequently)?
I will give it a try, with some speculation mixed in...
First, jQuery is provided by Rails itself, and depending on your layout, it may come from a CDN. So let's look at the libraries that may change over time. What are the scenarios here?
1. A user visits the web site for the first time. Their browser (depending on the type) has to load all JavaScript files before it can render anything that comes below them (therefore, move them to the end of the page). Depending on the browser, it may load 2, 4, 6, or 8 resources at a time; if your site consists of dozens or even hundreds of them, this will slow down the presentation.
2. A user visits the web site a second time, normally on the same day, hour, or even minute. The whole bundle is cached; there is only one request to confirm that the cached version can still be used, which is fast. If all resources (hundreds) were loaded one after another, there would be hundreds of such requests even when the cache is valid.
3. A user visits the web site a second time, with some time in between (let's say 15 days). Only one resource has changed; all the others could have been cached and reused. How probable is that?
4. A user (the developer) views his work during development. No asset pipeline is used and nothing is cached, because every change should be noticed immediately.
So I think, from the web site's point of view, only scenario 3 may be (a little bit) slower, and it is the most improbable one. Normally, the overhead of many, many requests matters much more than the size of the requests.
If you have the time, just try it with a tool that displays the loading time of all resources. There may be edge cases where one resource changes often and should therefore not be included in the asset pipeline, but normally every change touches numerous resources, and caching them as one big blob helps to avoid a lot of requests.
Here are some references to literature that discuss this:
Steve Souders: High Performance Web Sites - short and a good summary
Steve Souders: High Performance Web Sites - the same material in book form
Steve Souders: Even Faster Web Sites - more advanced, same topic
Cary Millsap: Thinking Clearly About Performance (first part) - more on the server side, but excellent, and especially the start is very clear
So, we have a very large and complex website that requires a lot of state information to be placed in the URL. Most of the time, this is just peachy and the app works well. However, there are (an increasing number of) instances where the URL length gets reaaaaallllly long. This causes huge problems in IE because of the URL length restriction.
I'm wondering, what strategies/methods have people used to reduce the length of their URLs? Specifically, I'd just need to reduce certain parameters in the URL, maybe not the entire thing.
In the past, we've pushed some of this state data into session... however this decreases addressability in our application (which is really important). So, any strategy which can maintain addressability would be favored.
Thanks!
Edit: To answer some questions and clarify a little, most of our parameters aren't an issue... however some of them are dynamically generated with the possibility of being very long. These parameters can contain anything legal in a URL (meaning they aren't just numbers or just letters, could be anything). Case sensitivity may or may not matter.
Also, ideally we could convert these to POST, however due to the immense architectural changes required for that, I don't think that is really possible.
If you don't want to store that data in the session scope, you can:
Send the data as a POST parameter (in a hidden field), so data will be sent in the HTTP request body instead of the URL
Store the data in a database and pass a key (that gives you access to the corresponding database record) back and forth, which raises some scalability and possibly security issues. I suppose this approach is similar to using the session scope.
most of our parameters aren't an issue... however some of them are dynamically generated with the possibility of being very long
I don't see a way to get around this if you want to keep full state info in the URL without resorting to storing data in the session, or permanently on server side.
You might save a few bytes using some compression algorithm, but it will make the URLs unreadable, most algorithms are not very efficient on small strings, and compressing does not produce predictable results.
The only other ideas that come to mind are
Shortening parameter names (query => q, page => p, ...) might save a few bytes
If the parameter order is very static, using mod_rewrite-style directory structures /url/param1/param2/param3 may save a few bytes because you don't need to use parameter names
Whatever data is repetitive and can be "shortened" into numeric IDs or shorter identifiers (like place names of company branches, product names, ...) can be kept in an internal, global, permanent lookup table (London => 1, Paris => 2, ...); see the sketch below
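A rough sketch of that last idea (the branch table and parameter names are invented for illustration):

```typescript
// Long, repetitive values are swapped for short IDs before the URL is built,
// and expanded again when the request is parsed on the server.
const branchToId: Record<string, number> = { London: 1, Paris: 2, 'New York': 3 };
const idToBranch = Object.fromEntries(
  Object.entries(branchToId).map(([name, id]) => [String(id), name])
);

function shortenParams(params: URLSearchParams): URLSearchParams {
  const out = new URLSearchParams(params);
  const branch = out.get('branch');
  if (branch && branch in branchToId) {
    out.set('b', String(branchToId[branch])); // ?branch=New+York becomes ?b=3
    out.delete('branch');
  }
  return out;
}

function expandParams(params: URLSearchParams): URLSearchParams {
  const out = new URLSearchParams(params);
  const id = out.get('b');
  if (id && id in idToBranch) {
    out.set('branch', idToBranch[id]);
    out.delete('b');
  }
  return out;
}
```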
Other than that, I think storing data on the server side, identified by a random key as @Guido already suggests, is the only real way. The upside is that you have no size limit at all: a URL like
example.com/?key=A23H7230sJFC
can "contain" as much information on server side as you want.
The downside, of course, is that in order for these URLs to work reliably, you'll have to keep the data on your server indefinitely. It's like having your own little URL shortening service... Whether that is an attractive option will depend on the overall situation.
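A minimal sketch of that idea, with an in-memory Map standing in for the database a real implementation would need (so that keys survive restarts):

```typescript
import { randomBytes } from 'node:crypto';

// Maps a short random key to the full state that would otherwise live in the URL.
const stateStore = new Map<string, Record<string, string>>();

function storeState(state: Record<string, string>): string {
  // 9 random bytes -> 12 URL-safe characters (requires Node 16+ for 'base64url').
  const key = randomBytes(9).toString('base64url');
  stateStore.set(key, state);
  return key; // put only this key in the URL, e.g. example.com/?key=...
}

function loadState(key: string): Record<string, string> | undefined {
  return stateStore.get(key);
}
```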
I think that's pretty much it!
One option, which is good when they really are navigable parameters, is to work these parameters into the path portion of the URL, e.g.
http://example.site.com/ViewPerson.xx?PersonID=123
=>
http://example.site.com/View/Person/123/
If the data in the URL is automatically generated, can't you just generate it again when needed?
With little information it is hard to think of a solution but I'd start by researching what RESTful architectures do in terms of using hypermedia (i.e. links) to keep state. REST in Practice (http://tinyurl.com/287r6wk) is a very good book on this very topic.
Not sure what application you are using. I have had the same problem and I use a couple of solutions (ASP.NET):
Use Server.Transfer and HttpContext (PreviousPage in .Net 2+) to get access to a public property of the source page which holds the data.
Use Server.Transfer along with a hidden field in the source page.
Use compression on the query string.