cache multiple resources from a single request - siesta-swift

Let's say I'm making a request for /products.json which returns a JSON array with X number of products. Each is available at /product/[id].json. Is it possible to make Siesta cache that information instead of making a request for each product? Or do I have to cache my models separately from their resources?

There’s a short discussion of this here:
https://github.com/bustoutsolutions/siesta/issues/156
As Siesta currently exists, each URL is a separate resource with a separate cached state. However, you can manually propagate changes from an index/list/search resource to the corresponding resources for its individual results:
childResource.addObserver(self)

parentResource.addObserver(owner: self) {
    if case .newData = $1 {
        childResource.invalidate()

        // Delayed refresh prevents redundant load if multiple
        // children trigger a refresh on the same parent
        DispatchQueue.main.async {
            childResource.loadIfNeeded()
        }
    }
}
See that GitHub issue discussion for more background.
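Applied to the products example in the question, the same idea might look roughly like this. This is only a sketch: the base URL, the "id" field, and the ProductListObserver name are assumptions about your API, not anything Siesta prescribes.

import Siesta

let api = Service(baseURL: "https://example.com")  // assumed service for this sketch

final class ProductListObserver {
    let productsResource = api.resource("/products.json")

    func startObserving() {
        productsResource.addObserver(owner: self) { resource, event in
            guard case .newData = event else { return }

            // Whenever the index gets fresh data, mark each product's own
            // resource as stale so it won't keep serving outdated content.
            for product in resource.jsonArray as? [[String: Any]] ?? [] {
                guard let id = product["id"] else { continue }
                api.resource("/product/\(id).json").invalidate()
            }
        }
    }
}

Note that Siesta still treats each product URL as a separate resource with its own cached state, so each product costs one request when its resource is eventually loaded; the observer above just keeps already-fetched children from serving outdated data after the index changes.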

Related

How to reload a ResourceTable programmatically in Laravel Nova?

I have a custom resource-tool (ledger entry tool) that modifies values of a resource as well as inserts additional rows into related resources.
"Account" is the main resource.
"AccountTransaction" and "AccountLog" both get written to when a ledger entry is created. And through events, the account.balance value is updated.
After a successful post of a ledger entry (using Nova.request) in the resource-tool, I would like the new balance value updated in the account detail panel, as well as the new entries in AccountTransaction and AccountLog to be visible.
The simple way would be to simply reload the page, but I am looking for a more elegant solution.
Is it possible to ask these components to refresh themselves from within my resource-tool vue.js component?
Recently had the same issue, until I referred to this block of code
Nova has Vuex store modules, where they have defined storeFilters.
Assigning filters an empty object and then requesting them again "reloads" the resources. Haven't done much more research on this matter, but if you are looking for what I think you are looking for, this should be it.
async reloadResources() {
    this.resourceName = this.$router.currentRoute.params.resourceName || this.$router.currentRoute.name;

    if (this.resourceName) {
        let filters_backup = _.cloneDeep(this.$store.getters[`${this.resourceName}/filters`]);
        let filters_to_change = _.cloneDeep(filters_backup);
        filters_to_change.push({});

        await this.$store.commit(`${this.resourceName}/storeFilters`, filters_to_change);
        await this.$store.commit(`${this.resourceName}/storeFilters`, filters_backup);
    }
},

AWS DynamoDB queries using Swift

I am using Swift and AWS DynamoDB for a mobile app. I followed the tutorial and can save data successfully. However, when I try to load data, I found that the saving and loading always happen after all the tasks in viewDidLoad have finished, so I cannot pass the data out in the same view. Is there any way to save or retrieve data immediately?
Below is my code:
mapper.query(Table.self, expression: queryExpress).continueWith { (task: AWSTask<AWSDynamoDBPaginatedOutput>!) -> Any? in
    print("test")
    if let error = task.error as NSError? {
        print("The request failed. Error: \(error)")
    }
    if let paginatedOutput = task.result {
        for item in paginatedOutput.items {
            print("querying")
            // pass info out to array
        }
    }
    return nil
}
Fetching data from the network is an asynchronous action. You can't delay loading the screen while it completes. It may take a long time. It might not ever complete.
Your view controller must handle the case where it doesn't have data yet, and update itself when that data becomes available. The first step is to stop making network queries in the view controller itself. View controllers should never directly query the network. They should query model objects that outlive the view controller. The model objects are responsible for making queries to the network and updating themselves with the results. Then the view controller updates itself based on the model. The name for this pattern is Model-View-Controller, and it is fundamental to Cocoa development. (Search around for many tutorials and discussions of this pattern.)
But regardless of where you make the queries and store the data, you will always have to deal with the case where the data is not yet available, and display something in the meantime. Nothing can fix this in a distributed system.
When the query finishes successfully, load the data into your view. You can start the query in your viewDidLoad method, but you need to present the data in a separate method that you call when the data actually arrives.
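To make that concrete, here is a minimal sketch under the question's names: Table and queryExpress come from the question's code, while ItemStore, loadItems, and render(_:) are made-up names for the model object and the view controller's update method, not part of the AWS SDK.

import AWSDynamoDB

// Sketch only: `Table` is the model class from the question; the
// completion-handler shape is illustrative.
final class ItemStore {
    func loadItems(expression: AWSDynamoDBQueryExpression,
                   completion: @escaping ([Table]) -> Void) {
        let mapper = AWSDynamoDBObjectMapper.default()
        mapper.query(Table.self, expression: expression)
            .continueWith { (task: AWSTask<AWSDynamoDBPaginatedOutput>!) -> Any? in
                if let error = task.error as NSError? {
                    print("The request failed. Error: \(error)")
                }
                let items = (task.result?.items ?? []).compactMap { $0 as? Table }
                // Hop back to the main queue before touching any UI state.
                DispatchQueue.main.async {
                    completion(items)
                }
                return nil
            }
    }
}

In the view controller you would then start the load in viewDidLoad, show a placeholder, and only render in the completion handler, e.g. store.loadItems(expression: queryExpress) { self.render($0) }, where render(_:) is whatever method reloads your table view or updates your labels.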

Dealing with Realm transactions

I have a RESTful API from which I retrieve a large set of data. I am persisting it locally using Realm and the following call:
func addObjectType(object: ObjectType) {
    // Check for existence of data
    if realm.object(ofType: ObjectType.self, forPrimaryKey: object.id) == nil {
        // Persist your data easily
        try! realm.write {
            realm.add(object)
        }
    }
}
The app has a feature to delete the data locally. I have implemented it as following:
func deleteAllData() {
    if !realm.isEmpty {
        do {
            if !realm.isInWriteTransaction {
                realm.beginWrite()
                realm.deleteAll()
                try! realm.commitWrite()
            }
        }
        NotificationCenter.default.post(name: Notification.Name("updateUI"), object: nil)
    }
}
However, looking at the Realm documentation I see the following:
Indicates whether the Realm is currently in a write transaction.
Warning
Do not simply check this property and then start a write transaction
whenever an object needs to be created, updated, or removed. Doing so
might cause a large number of write transactions to be created,
degrading performance. Instead, always prefer performing multiple
updates during a single transaction.
Is my implementation correct?
I feel that I am missing some checks..
Realm's general rule of thumb is that you should minimize the number of write transactions you perform. This includes batching multiple writes together inside one block, and avoiding transactions altogether if the data hasn't actually changed.
Realm write transactions are self-contained on separate threads. If a background thread is performing a write transaction, all other transactions on other threads will be blocked. As a result, it's not necessary to check isInWriteTransaction unless you might already have a write transaction open on that particular thread.
So, no, you're not missing any extra checks. As long as you haven't accidentally left a write transaction open somewhere else, you can even reduce the number of checks you've got there. :)
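As a sketch of that advice (realm, ObjectType, and the updateUI notification follow the question's code, inside the same class; the fetchedObjects parameter is illustrative), the two functions could be collapsed like this:

// One write transaction for a whole batch of fetched objects,
// instead of one transaction per object.
func addObjectTypes(_ fetchedObjects: [ObjectType]) {
    try! realm.write {
        for object in fetchedObjects
            where realm.object(ofType: ObjectType.self, forPrimaryKey: object.id) == nil {
            realm.add(object)
        }
    }
}

// With no other write transaction open on this thread, the extra
// checks in the question's deleteAllData() can simply be dropped.
func deleteAllData() {
    try! realm.write {
        realm.deleteAll()
    }
    NotificationCenter.default.post(name: Notification.Name("updateUI"), object: nil)
}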

Where to store a Doctrine variable created in a component so that it's accessible anywhere?

Note I am referring to one request, and not several requests and sessions.
I have several components that require a Doctrine user object; some are located in the layout, others in templates. Sometimes I need that Doctrine user object in an action. Currently I have added a function to the sfUser class that loads that object from the database, which means every time I call that function I make a call to the DB. I'd like to know where to store this object so that I can access it without having to query the DB every time I need it. Again, we're talking about a single request, not several requests or anything that would require a session.
Can I save it in sfContext somehow? Any other places so that it can be available everywhere?
You can store it in your model's Table class, because tables are always accessed as singletons.
class sfGuardUserTable extends PluginsfGuardUserTable
{
    protected $specialUser = null;

    public function getSpecialUser()
    {
        if (null === $this->specialUser)
        {
            $this->specialUser = $this->findOneById(1);
        }

        return $this->specialUser;
    }
}
Now, you can use this in actions and components like this:
$u = sfGuardUserTable::getInstance()->getSpecialUser();
And you will always end up with one query.
Alternatively, you can configure the Doctrine cache so that the result of this specific query is always cached. What is so good about it is that if you use, say, the APC backend, you will have it cached across requests. You also get query caching as a bonus (this is not result caching, read the link I provided carefully)!

Data Access Layer - static list objects and caching

I am developing a site using .NET MVC.
I have a data access layer which basically consists of static list objects that are created from data within my database.
The method that rebuilds this data first clears all the list objects. Once they are empty, it then adds the data. Here is an example of one of the lists I'm using: a method which loads all the UK postcodes. There are about 50 methods similar to this in my application that return all sorts of information, such as towns, regions, members, emails etc.
public static List<PostCode> AllPostCodes = new List<PostCode>();
When the rebuild method is called, it first clears the list:
ListPostCodes.AllPostCodes.Clear();
Next, it rebuilds the data by calling the GetAllPostCodes() method:
/// <summary>
/// static method that returns all the UK postcodes
/// </summary>
public static void GetAllPostCodes()
{
    using (fab_dataContextDataContext db = new fab_dataContextDataContext())
    {
        IQueryable AllPostcodeData = from data in db.PostCodeTables select data;
        IDbCommand cmd = db.GetCommand(AllPostcodeData);
        SqlDataAdapter adapter = new SqlDataAdapter();
        adapter.SelectCommand = (SqlCommand)cmd;
        DataSet dataSet = new DataSet();

        cmd.Connection.Open();
        adapter.FillSchema(dataSet, SchemaType.Source);
        adapter.Fill(dataSet);
        cmd.Connection.Close();

        // create the objects
        foreach (DataRow row in dataSet.Tables[0].Rows)
        {
            PostCode postcode = new PostCode();
            postcode.ID = Convert.ToInt32(row["PostcodeID"]);
            postcode.Outcode = row["OutCode"].ToString();
            postcode.Latitude = Convert.ToDouble(row["Latitude"]);
            postcode.Longitude = Convert.ToDouble(row["Longitude"]);
            postcode.TownID = Convert.ToInt32(row["TownID"]);

            AllPostCodes.Add(postcode);
            postcode = null;
        }
    }
}
The rebuild occurs every hour. This ensures that every hour the site will have a fresh set of cached data.
The issue I've got is that occasionally, during a rebuild, the server will be hit by a request and an exception is thrown. The exception is "Index was outside the bounds of the array." and it occurs while a list is being cleared.
ListPostCodes.AllPostCodes.Clear(); // throws the exception, although it is not always this particular list
Once this exception is thrown the application dies and all users are affected. I have to restart the server to fix it.
I have two questions...
If I utilise caching instead of static objects, would this help?
Is there any way I can say "while the rebuild is taking place, wait for it to complete before accepting requests"?
Any help is most appreciated ;)
truegilly
1. If I utilise caching instead of static objects, would this help?
Yes, everything you are doing here is more easily done with the caching functionality that is built into ASP.NET.
2. Is there any way I can say "while the rebuild is taking place, wait for it to complete before accepting requests"?
The common pattern goes like this:
You request data from the data layer.
If the data layer sees that there is data in the cache, it serves the data from the cache.
If no data is in the cache, the data is requested from the DB and put into the cache. After that it is served to the client.
There are rules (CacheDependency and timeout) that determine when the cache is to be cleared.
The easiest solution would be to stick to this pattern: that way the first request hits the database and subsequent requests are served from the cache. You trigger the refresh by implementing a SqlCacheDependency.
You have to make sure that your list is not modified by one thread while other threads are trying to use it. This would be a problem even if you used the ASP.NET cache since collections are just not thread-safe. One way you can do this is by using a SynchronizedCollection instead of a List. Then make sure to use code like the following when you access the collection:
lock (synchronizedCollection.SyncRoot) {
    synchronizedCollection.Clear();
    // etc...
}
You will also have to use locking when you read the collection. If you are enumerating over it, you should probably make a copy before doing so as you don't want to lock for a long time. For example:
List<whatever> tempCollection;

lock (synchronizedCollection.SyncRoot) {
    tempCollection = new List<whatever>(synchronizedCollection);
}

// use tempCollection to access cached data
The other option would be to create a ThreadSafeList class that uses locking internally to make the list object itself thread-safe.
I agree with Tom, you will have to do synchronization to make this work. One thing that would improve the performance is not clearing the list until you actually receive the new values from the database:
// Modify your function to return a new list instead of filling the existing one.
public static List<PostCode> GetAllPostCodes()
{
    List<PostCode> temp = new List<PostCode>();
    ...
    return temp;
}
And when you rebuild the data:
List<PostCode> temp = GetAllPostCodes();
AllPostCodes = temp;
This makes sure that your cached list is still valid while GetAllPostCodes() is executing. It also has the advantage that you can use a read-only list which makes the synchronization a bit easier.
In your case, you need to refresh the data every hour.
1) You should use the Cache with an absolute expiration set to 1 hour, so it expires every hour. Check the Cache before using it by doing a null check; if it is null, get the data from the DB and populate the Cache.
2) The disadvantage of the above approach is that the data can be stale by up to 1 hour. If you want the most up-to-date data at all times, use SqlCacheDependency (push), so that whenever there is a change in the SELECT command you are using, the cache is refreshed from the database with the updated data.
