Ignite - External Storage - reload didn't work - storage

In my application I have a cache that is filled from External Storage (a Postgres database). All of the configuration follows the examples in the developer guide.
I load the data using this code:
ignite.cache("ProductCache").loadCache(null);
and it works fine.
But if I change the data in the external store (insert/update/delete) and then load the data again with the same code, those changes do not appear in the cache.
If I restart the application and load the data, everything is all right - I see the changes.
Why?

You can call ignite.cache("ProductCache").clear() first, then loadCache(null). Otherwise, Ignite will not overwrite entries that are already present (by key).
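A minimal sketch of that sequence; the Long/Product key and value types below are placeholders for your own model:

import org.apache.ignite.IgniteCache;

// Clear first so already-present keys are not skipped, then re-read everything
// from the underlying CacheStore (the Postgres database):
IgniteCache<Long, Product> cache = ignite.cache("ProductCache");
cache.clear();
cache.loadCache(null);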

Related

Confused about parse local datastore & cache

I'm developing an iOS app and I want a level of offline support, but I'm struggling to decide between the local datastore and the cache, as it appears you can't use the two features together.
My query is quite basic and doesn't change; only the data that is retrieved can change.
If I use one of the cache policies, I get connection errors and nothing appears to be returned from the cache.
The workflow I'm after is along the lines of the following:
-> When connected to the internet, perform the query and store the objects locally.
-> When there is no internet, retrieve the previously downloaded objects.
For the workflow you describe, I think you're looking for a cache. If you would like the user to be able to modify the data without a connection and then, when there is wifi again, synchronise the local data with the remote data, then you'll need the local datastore behavior.
The problem for me is when you want both in different parts of the same app, because in Parse, if you use the local datastore, you can't use the cache. I don't really understand why!
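On the cache-policy side, a sketch of how a policy is set on a Parse query, assuming Swift and a hypothetical "Item" class (and note that, as said above, query caching is not available once the local datastore is enabled):

import Parse

let query = PFQuery(className: "Item")
query.cachePolicy = .networkElseCache   // use the network when online, fall back to the cache offline
query.findObjectsInBackground { objects, error in
    // objects come from the server when reachable, otherwise from the local query cache
}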

How to persist Firebase objects to disk in iOS?

It seems that the Firebase iOS implementation doesn't support offline caching of the client model. What this means in practice is that:
For Firebase apps requiring authentication, you need to authenticate first and wait for Firebase to finish the login (check the user identity, open a socket, etc.) before you can start moving data. This takes 1-8 seconds (usually 2-5) depending on network conditions, at least here in Finland.
After authenticating, Firebase first downloads the initial set of data and initializes the client cache. The time this takes depends on the size of the data you add listeners for, but it's usually quite fast.
The problem here is that if you're using Firebase to implement, for example, a messaging app, you'd most likely want to show the user a previously cached version of the message threads and messages before the actual connection with the backend server is established.
I'd assume a correct implementation would need to handle:
1) The client-side model <-> Firebase JSON mapping (I use Mantle for this)
2) Persisting the client-side model to disk (a manual implementation using NSKeyedArchiver, Core Data, or similar?)
3) Synchronizing the on-disk model with the Firebase-linked model in memory when a connection is available (a manual implementation?)
Has anyone come up with a solution (their own or 3rd party) to achieve 2) and 3)?
It seems Firebase has solved this problem since this question was asked. There are a lot of resources on Offline Capabilities now with Firebase, including disk persistence.
For me, turning on persistence was as simple as the following in my AppDelegate:
Firebase.defaultConfig().persistenceEnabled = true
Assuming your app has been run with an internet connection at least once, this should work well in loading the latest local copy of your data.
There is a beta version of this technology in the iOS client, described here: https://groups.google.com/forum/#!topic/firebase-talk/0evB8s5ELmw - give it a go and let the group know how it goes.
Just one line is required for persistence with Firebase on iOS:
FIRDatabase.database().persistenceEnabled = true
This setting is covered in the Firebase docs.
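With the newer Swift SDK naming, the same switch plus keepSynced looks roughly like this ("messages" is just an illustrative node path):

import FirebaseCore
import FirebaseDatabase

// In the AppDelegate, before any database reference is created:
FirebaseApp.configure()
Database.database().isPersistenceEnabled = true

// Optionally keep a node synced so the on-disk cache stays reasonably fresh:
let messagesRef = Database.database().reference(withPath: "messages")
messagesRef.keepSynced(true)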

Elmah XML Logging on Load Balanced Environment

We're implementing Elmah for an internal application. For development and testing we use a single server instance but on the production environment the app is delivered using a load balanced environment.
Everything works like a charm with Elmah, except for the fact that the logs are written independently on each server. What I mean by this is that if an error happens on Server1, the XML file is stored physically on that server, and the same goes for Server2, since I'm storing those files in App_Data.
When I access the axd location to see the error list, I only see the errors from the server that happened to serve my request.
Is there any way to consolidate the XML files other than putting them in a shared folder? Using a shared folder would mean granting the user account that runs the application on each server access to that folder, and the folder would live on only one of the servers instead of both.
I cannot use In-Memory or Database logging since FileLog is the only one allowed.
You might consider using ElmahR in this case, since you are not able to use in-memory or database logging. ElmahR gives you a central location that the two load-balanced servers can send errors to (in addition to logging them locally) via an HTTP post. You can then access the ElmahR site to view an aggregated list. Also, ElmahR stores the error messages it receives in a SQL Server CE database, so they are persisted.
Keep in mind that if the ElmahR dashboard app's design does not meet your initial needs, it can be modified as required, given that it is an open source project.
Hope this might be a viable option.

How can I pass session information from one screen to another in MVC 3 running on Azure

I have a screen where a user selects a database source from a drop-down. Once that's selected, I would like the information passed on to other screens so the user doesn't have to keep selecting it.
How can I pass information like this from one screen to another? Note that the information is just very small things like:
DatasourceID - 2 characters
SubjectID - 2 characters
As I am running on Azure can I assume the best place to store this would be on the client side? I saw one implementation that stored data like this:
Session["abc"] = "def";
if (Session["abc"] != null)
etc ...
Is this the best way, or am I missing something? Also, how would the above work when the page could be served by a different server each time? Does the above store the information locally?
The Session is stored on the server side. In Azure you have a few options for where exactly it is stored. It depends on what you would like to do with this datasource. If this is something you just need on the following screen, you can store it in TempData, which is stored in the session; it is kept there until you read it.
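A sketch of both options in a controller; the controller, action, and parameter names below are illustrative, not from the question:

using System.Web.Mvc;

public class DatasourceController : Controller
{
    [HttpPost]
    public ActionResult Select(string datasourceId)
    {
        // TempData: backed by the session, kept only until the next request reads it.
        TempData["DatasourceID"] = datasourceId;

        // Session: kept for the whole session, available on every subsequent screen.
        Session["DatasourceID"] = datasourceId;

        return RedirectToAction("Index", "Report");
    }
}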
Now you have these options to store the session state:
in Azure AppFabric Cache
in a SQL Azure DB
in blob storage
Azure AppFabric Caching has a Session provider that is very easy to set up. You can just create a new cache in the Azure portal and get the required web.config entries by clicking the corresponding button on the toolbar; this is also explained in detail in the documentation.
Using that, you can store things in the Session out of process. The downside is that it's a bit expensive (about $45/month for a 128 MB cache). The alternative would be to store session state in SQL Azure; there's a Session provider for SQL Azure.
Scott Hanselman has a great introduction to the ASP.NET universal providers. If you're not using membership, then you just need to set up System.Web.Providers.DefaultSessionStateProvider.
Just make sure you point the connection string to your SQL Azure DB. Note: You must set MultipleActiveResultSets=True in the connection string, so be sure to add it back if you’ve copied the SQL Azure connection string from the portal.
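A rough sketch of the web.config entries for the SQL Azure session provider (the server, database, and credentials are placeholders, and the assembly version details are omitted):

<connectionStrings>
  <add name="DefaultConnection"
       connectionString="Server=tcp:yourserver.database.windows.net;Database=yourdb;User ID=user@yourserver;Password=...;Encrypt=True;MultipleActiveResultSets=True;"
       providerName="System.Data.SqlClient" />
</connectionStrings>
<system.web>
  <sessionState mode="Custom" customProvider="DefaultSessionProvider">
    <providers>
      <add name="DefaultSessionProvider"
           type="System.Web.Providers.DefaultSessionStateProvider, System.Web.Providers"
           connectionStringName="DefaultConnection" />
    </providers>
  </sessionState>
</system.web>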
Then there is also a session provider for blob storage in the training kit, available with a sample app at http://code.msdn.microsoft.com/windowsazure/Windows-Azure-ASPNET-03d5dc14.
I believe it is unsupported by MS.
Hope this helps.

How to use multiple caches in rails?

I have a rails application where I would like to use both memcached and the file store cache, for different purposes.
I want to use the file store cache to keep a large number of pages that don't change often (some not at all) - i.e. page caching - and use memcached for everything else (action and DB caching etc). The reason is that the pages stored on the file store cache are likely to require a large amount of storage, but individually most will be accessed infrequently.
Is this possible, or will configuring memcached as the cache store mean that it is also used for page caching?
As a secondary question, what is a safe way to remove pages from the file store cache in some kind of cron job, given that there does not seem to be an option to specify a TTL for this cache? For example, a Unix find command would quickly find and remove all old pages, or pages that haven't been accessed in a long time. Is this safe to do given that the app server might try to serve one of those pages at that moment (though this is very unlikely)? If not, what is the best way to do this?
If you want to use the filesystem only for page caching and memcached for action and fragment caching, you're fine. Page caching always uses the filesystem. Just remember that page caching bypasses your Rails application, so you can't use it for pages that include content that changes from user to user or for pages that are access controlled with filters.
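For reference, a sketch of that split in an older Rails app (page caching was still built in before Rails 4); the controller name, cache directory, and memcached host are illustrative:

# config/environments/production.rb
config.action_controller.perform_caching = true
config.cache_store = :mem_cache_store, "localhost:11211"   # action/fragment caching
config.action_controller.page_cache_directory = "#{Rails.root}/public/cached_pages"

# app/controllers/products_controller.rb
class ProductsController < ApplicationController
  caches_page :show      # written to the filesystem, served directly by the web server
  caches_action :index   # stored in memcached via the cache_store above
end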
Regarding the removal of pages, on Unix, a file can be deleted, but it is not actually removed from disk until all open file handles are closed. If the app server has opened the file to serve a request, and the find command deletes it a split-second later, the app server doesn't suddenly get an error when it tries to read.
You could also consider having find delete files based on their last access time, instead of creation or modification, and using a sweeper in your Rails app to delete the cached page when its content is out of date.
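If you go the cron route, something along the lines of the following find one-liner would do it (the path and the 30-day cutoff are placeholders, and access times must not be disabled with noatime on the filesystem):

find /var/www/app/public/cached_pages -type f -atime +30 -delete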
A simpler approach may be to use an HTTP cache upstream of your application as your page cache rather than two stores within Rails. This way you can use HTTP headers to control the cache behavior, including TTLs. These same limits will also apply to browsers' local caches as a nice bonus.
Varnish is about as high performance as it gets, but it would require setting up another moving piece in your hosting environment as a proxy. This may still be worthwhile depending on what you're doing.
A simpler approach might be Rack::Cache, which will be easy to set up provided you're using a Rack-enabled version of Rails.
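A sketch of the Rack::Cache route on Rails 3.1+, where HTTP headers drive the TTL (the 12-hour value is arbitrary):

# config/environments/production.rb
config.action_dispatch.rack_cache = true

# In a controller action, the Cache-Control header becomes the TTL:
def show
  expires_in 12.hours, public: true
  # render as usual ...
end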
